
Effective Interventions Unit Evaluation Guide 12: Intensive interventions with young people

Description: This evaluation guide considers how to plan an evaluation and concentrates on identifying appropriate evaluation questions and outcome measures.
Website Publication Date: December 16, 2003


    This document is also available in pdf format (148k)

    WHAT IS THE PURPOSE OF THIS GUIDE?

    This is the twelfth evaluation guide in the EIU evaluation series. It follows the EIU's research and guidance on working with young people, particularly 'Services for Young People with Problematic Drug Misuse: A guide to principles and practice'. This evaluation guide considers how to plan an evaluation and concentrates on identifying appropriate evaluation questions and outcome measures.

    WHO SHOULD READ IT?

    Anyone involved in commissioning, planning, delivering and evaluating services working with young people with substance misuse problems, including generic services.

    WHO WROTE IT?

    This guide was prepared by the EIU working with Nicola Richards, Programme Manager of the Partnership Drugs Initiative, Lloyds TSB Foundation for Scotland. We would like to thank Mark Bitel of Partners in Evaluation (http://www.pievaluation.co.uk) for his help and expertise in preparing this guide.

    Evaluating an intervention with young people should follow the same basic principles as any evaluation. The EIU has produced a series of short evaluation guides that cover topics from definitions and planning through to reporting and dissemination. These are available at: http://www.drugmisuse.isdscotland.org/eiu/eiu.htm. Guide 1 defines an intervention as: a policy, programme, service or project designed to bring about specified change to target areas or groups.

    An evaluation is a structured process to identify whether the stated aims and objectives of an intervention have been met. Evaluation doesn't need to be a complicated process and can be part of the internal reflection of an organisation, but it does need to be systematic. Effective evaluation depends upon having a very clear idea of what the intervention is aiming to achieve. An evaluation involves the collection and analysis of reliable and relevant data, and considers the context and process of implementation and the outcomes achieved.

    The process of implementing an intervention and the impact that this has on clients are often called outputs and outcomes. Outputs are the activities involved in delivering the service: for example, staff contacts with clients as part of an outreach service, or leaflets and posters providing information on services. Outputs are often measures of 'throughput' and so can be distinguished from outcomes. For example, a service may see a large number of clients: the counselling offered is the immediate output, but the outcome should be an improved state of health. Outcomes are the results of the intervention. Outcomes can be 'soft' (e.g. improvements in self-esteem and family relationships) as well as 'hard' (e.g. getting into college). There may be a difference between the intended outcome, as set out in the project objectives, and the actual outcome, established through a process of evaluation and monitoring.

    Planning an evaluation

    When planning a service level evaluation of an intervention the following issues will need to be addressed:

    The intervention

    • Does the intervention have clear and measurable objectives?

    • Why and how can the intervention be expected to achieve its objectives?

    • Who are the stakeholders? For example, clients and their families, volunteers, staff, partner organisations and funders.

    • What exactly does the intervention involve?

    • Is accurate monitoring information systematically collected?

    The evaluation

    • Who will use the results of the evaluation?

    • Who will the evaluation involve?

    • Are there sufficient resources - in terms of funding, staff time and skills?

    • Are there ethical or legal issues to address? For example, client consent and data protection registration.

    • What questions does the evaluation need to answer?

    • What methods will the evaluation use?


    Before you start… identify what an intervention aims to achieve

    Before embarking on an evaluation, it is crucial that the intervention is clearly planned and well-founded. The evaluation will only be as robust as the intervention that it studies: if the stated objectives don't accurately reflect the purpose and direction of the intervention, then the evaluation will not accurately represent the intervention. The 'Weaver's Triangle' can help you to set out clearly and concisely what it is you want to achieve and the activities that you believe will make this change happen. The example shown in the graphic below is for an intensive service working with young people with drug and alcohol problems, but the same process can be applied to any level of intervention.

    Evaluating outcomes

    Evaluation question: Have young people improved their educational performance?

    Possible methods/data sources:

    • record levels of attendance at school/college at the start and end of the intervention.

    • record educational attainment levels at the start and end of the intervention.

    • get client views of their school/college or employment options as part of action planning and review these periodically.

    • with client consent, phone parents/carers/teachers to gauge their views of client progress.

    Evaluation question: Has family functioning improved?

    Possible methods/data sources:

    • through discussion with the client, family and the referrer, make an assessment of the level of family functioning, e.g. is the child on the child protection register or accommodated by the local authority? Is the client able to identify key adults who provide a stable and supportive influence? Record these indicators at the start and then review progress.

    Evaluation question: Has the harm of drug/alcohol use declined?

    Possible methods/data sources:

    • ask clients to self-report their level of use at the start and end of the intervention.

    • record the incidence of risk behaviour, such as sharing equipment, bingeing, poly-drug use, or using in risky situations or with inappropriate peers.

    Evaluation question: Has there been a decline in criminal/anti-social activity?

    Possible methods/data sources:

    • record levels of contact with the police.

    • keep a record of self-reported levels of criminal activity in the past month.

    • through discussion with the client, family and referrer, assess the level of activity.

    Evaluation question: Have self-confidence and life skills increased?

    Possible methods/data sources:

    • through discussion with the client, assess levels of confidence and self-esteem. Consider indicators such as appearance, willingness to engage with others, etc.

    • some validated tools, such as the Rickter Scale, can give an assessment of self-esteem and skills.

    Evaluation question: Have the young people increased their involvement in mainstream services?

    Possible methods/data sources:

    • record the services used by the client at the start and end of the intervention.

    • get client perceptions of other services to identify possible barriers.

    • identify whether acting as an advocate or accompanying the young person to other services increases take-up, by offering this service to a random selection of your clients and comparing their take-up with the average rate.

    The sustainability of the outcomes should be taken into account in drawing conclusions. It can be helpful to build in follow-up questions with clients, referrers or parents/carers to check how things are progressing 3 or 6 months after the end of contact and to get their views on the impact of the intervention. It may be difficult to form a view as to the extent to which the outcomes are a direct result of the intervention. Possible ways of approaching this are to compare the outcomes with those of other similar interventions, or with a control group of people similar to the clients of the project who did not receive support - perhaps drawn from a waiting list.
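    As a rough illustration of that comparison, the sketch below computes the average change in an outcome score for clients who received the intervention and for a waiting-list comparison group. All records, field names and scores here are hypothetical.

    ```python
    # Illustrative sketch only: compare average outcome change for an
    # intervention group against a waiting-list comparison group.
    # All records, field names and scores are hypothetical.

    intervention = [
        {"client": "A", "score_start": 3, "score_end": 7},
        {"client": "B", "score_start": 4, "score_end": 6},
        {"client": "C", "score_start": 2, "score_end": 6},
    ]
    waiting_list = [
        {"client": "D", "score_start": 3, "score_end": 4},
        {"client": "E", "score_start": 4, "score_end": 4},
    ]

    def average_change(records):
        """Mean change in score between start and end of the period."""
        changes = [r["score_end"] - r["score_start"] for r in records]
        return sum(changes) / len(changes)

    print(f"Intervention group: {average_change(intervention):+.1f}")
    print(f"Waiting list:       {average_change(waiting_list):+.1f}")
    ```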

    It can seem difficult to measure changes in clients, but as the American evaluation expert Susan Philliber says, 'if you can see it, smell it, taste it or feel it, you can measure it!' It is just a matter of identifying a way of doing it.


    Identifying evaluation questions and methods

    Completing a Weaver's Triangle helps to focus attention on those areas of activity that are of most significance to the intervention. From this it should be possible to generate the questions that the evaluation will need to answer, covering outcomes and operational outputs.

    [Graphic: Weaver's Triangle example for an intensive service working with young people with drug and alcohol problems]

    MONITORING AND EVALUATING OUTPUTS

    The outputs detailed in the bottom layer of the Weaver's Triangle should be possible to count or simply record in some way as part of the service's monitoring procedures. The questions will be things like 'How many? Who? When? What range? How often?' This kind of monitoring information is often collected for funders, but it is invaluable to the people who deliver the service. By putting time aside to review the findings on a regular basis, such as at steering group or staff meetings, it is possible to evaluate what the monitoring information tells you about the operation of the service. For example, did clients receiving high intensity support have better outcomes than those that didn't? The sketch below illustrates that kind of check, and the table that follows highlights how routine monitoring information is relevant to the evaluation of the aims and effectiveness of the service.
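    A minimal sketch of that intensity check, assuming contact hours and an end-of-contact outcome score are recorded for each client (the records, field names and the 10-hour threshold are all hypothetical):

    ```python
    # Illustrative sketch: did clients receiving high-intensity support
    # (more contact hours) show better outcome scores on average?
    # Records, field names and the 10-hour threshold are hypothetical.

    clients = [
        {"contact_hours": 25, "outcome_score": 8},
        {"contact_hours": 6, "outcome_score": 5},
        {"contact_hours": 18, "outcome_score": 7},
        {"contact_hours": 4, "outcome_score": 6},
    ]

    high = [c["outcome_score"] for c in clients if c["contact_hours"] >= 10]
    low = [c["outcome_score"] for c in clients if c["contact_hours"] < 10]

    print(f"High intensity (n={len(high)}): mean score {sum(high) / len(high):.1f}")
    print(f"Low intensity  (n={len(low)}): mean score {sum(low) / len(low):.1f}")
    ```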

    Evaluation questions: Has the project received the expected number of referrals? Has the profile of the young people been as intended?

    Possible methods/monitoring data sources:

    • Count referrals.

    • Count how many referrals fall into each age band.

    • Categorise different levels of risk behaviour/substance misuse (e.g. on a 1-5 scale) and assign each referral a risk rating. Count how many young people fall into each group.

    Relevance to aims/potential learning: This information will identify whether the service is reaching the young people that it is designed to help, i.e. those with serious substance misuse problems. If the clients have less severe problems or are older/younger than anticipated, then the accessibility of the service, the referral procedures or the criteria may need to be reviewed. The results may show that a new needs assessment exercise is necessary.

    Evaluation question: Have suitably qualified and experienced staff been appointed and retained?

    Possible methods/monitoring data sources:

    • Compile information on staff skills and experience.

    • Review the level of staff turnover.

    • Conduct exit interviews with staff to find out why they leave.

    • Interview staff to find out what keeps them in the job.

    Relevance to aims/potential learning: Research evidence shows that having skilled and experienced staff who are able to build sustained relationships with young people has a positive impact on client outcomes. An audit of skills will allow training needs to be identified and cases to be allocated to the most appropriate worker. High levels of staff turnover may indicate that the service needs to review its staff support mechanisms.

    Evaluation questions: Has the caseload been appropriate? Where and when do staff tend to meet with clients?

    Possible methods/monitoring data sources:

    • Record the number of active cases per worker.

    • Record the level of intensity of contacts, e.g. number of hours.

    • Record travelling time.

    • For a set period (e.g. one month), map when and where client contacts take place.

    Relevance to aims/potential learning: This information will help to improve service planning, e.g. it will be possible to set more accurate targets for the number of referrals that can be accepted. Mapping where and when workers meet with clients can help improve efficiency, e.g. by changing working hours/opening times, grouping appointments or identifying partner agencies in specific localities.

    Evaluation questions: How long do clients stay in contact with the service? Do clients drop out of the programme? If so, why and when?

    Possible methods/monitoring data sources:

    • Review the start/end dates of work with clients to identify the average length of contact (see the sketch after this table).

    • Follow up clients to ask them why they left, or go back to referring agencies or individuals to ask if they can provide any feedback.

    Relevance to aims/potential learning: Mapping drop-out patterns can help to identify weaknesses, e.g. if clients routinely drop out after assessment, this may indicate that the initial assessment is off-putting or that inappropriate referrals are being accepted. Combining information about drop-outs with outcome data could help identify whether decreasing contact time has an impact on the effectiveness of the service. Qualitative work with clients may show that one session met their needs; if so, a lower-threshold service may be more appropriate.

    Evaluation questions: Do clients find the programme acceptable? What improvements would they suggest?

    Possible methods/monitoring data sources:

    • Use questionnaires, a suggestion box or a graffiti wall.

    • Have someone independent of the service (perhaps another young person) interview a random selection of clients.

    Relevance to aims/potential learning: Getting young people's views of the intervention will help identify how to make it more accessible and acceptable. It may help to clarify areas that are problematic for clients, such as opening times or confidentiality procedures, and generate solutions.
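    The start/end dates bullet above points to this sketch: a minimal illustration of computing the average length of contact from recorded dates. The dates and field names are hypothetical.

    ```python
    # Illustrative sketch: average length of contact computed from
    # recorded start and end dates of work with clients.
    # All dates here are hypothetical examples.
    from datetime import date

    cases = [
        {"start": date(2003, 1, 10), "end": date(2003, 4, 2)},
        {"start": date(2003, 2, 3), "end": date(2003, 2, 28)},
        {"start": date(2003, 3, 15), "end": date(2003, 7, 1)},
    ]

    lengths = [(c["end"] - c["start"]).days for c in cases]
    print(f"Average length of contact: {sum(lengths) / len(lengths):.0f} days")
    print(f"Shortest: {min(lengths)} days, longest: {max(lengths)} days")
    ```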

    Client Outcomes

    Identifying the outcomes achieved for clients can seem a challenge, particularly when the progress is in the form of 'soft' - but important - outcomes such as improvements in self-esteem and confidence. This information is absolutely critical to identifying the effectiveness of an intervention, and it doesn't have to be complex to collect. For example, scaling questions - a simple list of questions that are each scored out of 5 or 10 - can give an indication of client progress if used on a routine basis at the start, during and at the end of an intervention. Some agreed criteria or training across the staff team will help to ensure that scores are allocated consistently. Existing validated tools can be used or questions can be devised specifically for a service.
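    A minimal sketch of recording such scaling-question scores at the start, mid-point and end of an intervention for one client; the question wording and scores are invented for illustration.

    ```python
    # Illustrative sketch: scaling questions scored out of 10 at the
    # start, mid-point and end of an intervention for one client.
    # Question wording and scores are hypothetical.

    scaling_record = {
        "How confident do you feel meeting new people?": {"start": 3, "mid": 5, "end": 7},
        "How well are things at home?": {"start": 4, "mid": 4, "end": 6},
    }

    for question, scores in scaling_record.items():
        change = scores["end"] - scores["start"]
        print(f"{question} start {scores['start']}/10 -> end {scores['end']}/10 ({change:+d})")
    ```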

    An example of an individual outcome progress plan

    [Diagram: example of an individual outcome progress plan, with goals grouped under common outcome headings and scored out of 10]

    By aggregating information (presenting data as totals) collected on the progress achieved by individual clients it is possible to identify service-wide outcome measures. The outcome record sheet above allows the particular goals to fit the needs of individual clients but groups them under common outcome headings. This means that the outcome progress plans for all clients can be analysed, the scores out of 10 counted up and averaged, and put into a graph like this:

    [Chart: average scores across all clients for each common outcome heading]
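    A minimal sketch of that aggregation, assuming each client's progress plan records scores out of 10 under common outcome headings (the headings and scores are hypothetical):

    ```python
    # Illustrative sketch: aggregate scores (out of 10) from individual
    # outcome progress plans under common headings and average them,
    # giving the service-wide figures a chart would display.
    # Headings and client scores are hypothetical.

    plans = [
        {"Self-esteem": 6, "Family relationships": 5, "Education": 7},
        {"Self-esteem": 8, "Family relationships": 6, "Education": 4},
        {"Self-esteem": 5, "Family relationships": 7, "Education": 6},
    ]

    totals = {}
    for plan in plans:
        for heading, score in plan.items():
            totals.setdefault(heading, []).append(score)

    for heading, scores in totals.items():
        print(f"{heading}: average {sum(scores) / len(scores):.1f}/10 (n={len(scores)})")
    ```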

    Effective Interventions Unit
    Substance Misuse Division
    Scottish Executive
    St. Andrew's House
    Edinburgh EH1 3DG
    Tel: 0131 244 5117 Fax: 0131 244 2689
    EIU@scotland.gsi.gov.uk
    http://www.drugmisuse.isdscotland.org/eiu

    Using client data

    A structured assessment and review process, where findings are recorded in a simple and clear way, is enormously helpful when evaluating client outcomes. The main purpose of assessment is to identify the individual's needs and aspirations so that they receive the appropriate intervention. However, the assessment process can provide a wealth of information which can potentially be used as part of the evaluation. This will be easier to use if particularly important information is recorded in a standard way across clients - for example, on a sheet in the front of the client file - rather than in the middle of narrative in the case notes. An important consideration for service providers and evaluators will be how to use the data which has been collected as part of the evaluation process. 'Services for Young People with Problematic Drug Misuse - A guide to principles and practice' sets out some of the issues to be considered in terms of computerising, analysing and interpreting the data.

    Data protection - The Data Protection Act 1998 gives individuals the right to gain access to information on themselves, held on computer or paper. It also imposes on data users a number of obligations including the eight Data Protection Principles that say that data must be:

    • fairly and lawfully processed;

    • processed for limited purposes;

    • adequate, relevant and not excessive;

    • accurate;

    • not kept longer than necessary;

    • processed in accordance with the data subject's rights;

    • secure;

    • not transferred to countries without adequate protection.

    Information generated by an assessment is likely to fall into the category of 'sensitive personal data'. This includes data that relates to the physical or mental health of data subjects. To lawfully process sensitive data, and to ensure that the processing of information conforms to standards of 'fairness', particular conditions have to be met, such as obtaining the explicit consent of the person it relates to. This can be done relatively simply by explaining the typical flows of information and likely uses of data at the outset, through a standard information leaflet or letter. For example, new clients should be given information about the uses and disclosures of personal data and could be advised that their records may be made available to researchers who may wish to contact them in the future. If information is used for additional purposes, this will need to be explained to the individual at the appropriate time and when they are able to make sense of it. Consent may not be required if the information is anonymised and used at a service level rather than to make judgements about individual cases.
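    As a minimal sketch of the anonymisation mentioned above, assuming client records are held as simple field/value pairs (the field names and records are hypothetical, and real anonymisation decisions should be checked against the Act):

    ```python
    # Illustrative sketch: strip direct identifiers from client records
    # before aggregating at service level, so individual clients cannot
    # be identified. Field names are hypothetical; real anonymisation
    # needs careful review against the Data Protection Act 1998.

    clients = [
        {"name": "Jane Doe", "postcode": "EH1 3DG", "age_band": "16-18", "outcome_score": 7},
        {"name": "John Roe", "postcode": "G1 1AA", "age_band": "13-15", "outcome_score": 5},
    ]

    IDENTIFIERS = {"name", "postcode"}  # fields removed before analysis

    anonymised = [
        {key: value for key, value in record.items() if key not in IDENTIFIERS}
        for record in clients
    ]

    print(anonymised)
    ```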

    Organisations must notify the Data Protection Registrar about any data held on computer. It should be recorded that data may be used for research purposes. Forms can be completed online at: http://www.dpr.gov.uk.