Designing and Evaluating Behaviour Change Interventions

Easy-to-use guidance on designing and evaluating any behaviour change intervention using the 5-step approach

AN UPDATED VERSION OF THIS GUIDANCE IS AVAILABLE HERE http://www.gov.scot/Publications/2016/05/1967


Guidance for service providers, funders and commissioners

Step 5: Evaluate the logic model

Analyse your data to evaluate the project

Once you’ve collected some or all of your data, you can use it to analyse whether or not your model is working as predicted. Analysis is not just a case of describing your data. You need to address the following questions:

  1. What does the data tell you?
  2. Why are you seeing these results (it could be because of your activities or external factors)?
  3. What are you going to do about this? How can you improve the outcomes?

NB: Although you should certainly carry out this process at the end of your project, earlier interim analysis and evaluation is also highly valuable, helping you to identify problems and improve your service on an on-going basis.

Who should carry out evaluations?

Don’t automatically assume that outside evaluations will be more helpful or reliable, or that funders will necessarily view them this way.

As the comparison below shows, there are advantages and disadvantages to both outside and internal evaluations. You should consider these carefully before deciding which approach is right for your organisation.

You may also want to consider commissioning outside expertise to support with particular stages of the evaluation (e.g. designing a data collection framework or reviewing existing evidence).

Whatever your decision, remember to budget for either internal evaluation or external expertise in your funding proposals. ESS provide further guidance on budgeting for self-evaluation:
http://www.evaluationsupportscotland.org.uk/resources/237/

Outside vs. Internal Evaluation

Self-evaluation by staff member(s)

Advantages
  • Cheaper
  • 'In house' evaluators should have a clearer idea of your aims and project
  • Personal investment in improving the service
  • Easier to evaluate on an on-going basis and implement improvements continuously

Disadvantages
  • Staff may lack the skills or time to carry out evaluations
  • Staff may feel pressured to report positive findings
  • May be perceived as less reliable by some funders

Commissioning outside evaluation

Advantages
  • Findings may be perceived as more reliable or less biased by some funders and other stakeholders
  • Evaluators are trained in data collection and analysis
  • Offers an 'outsider' perspective

Disadvantages
  • Outside evaluators are usually brought in at the end of a project, limiting the ability to implement on-going improvements
  • May lack 'insider' knowledge about the project
  • May also feel pressured to report positive findings to those commissioning them

Testing the logic model - What does the data tell you?

Did the project work as it should have? The data you’ve collected will help to tell you whether your model worked as predicted, at each stage of the model. The following are examples of questions you might now be able to answer.

 

Inputs

  • Which aspects of the service were/were not evidence-based?
  • How much money was spent on activities? Was it sufficient?
  • How many staff were employed and at what cost?
  • What was the staff-to-user ratio? (See the worked sketch after this list.)
  • What did the staff do?
  • How many staff were trained?
  • What was the training?
  • Were there enough staff to deliver the activities as planned?
  • What other resources were required?
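Several of these input questions come down to simple arithmetic once the figures are to hand. A minimal sketch, using entirely hypothetical figures, of how you might compute cost per user and the staff-to-user ratio:

```python
# Illustrative only: all figures below are hypothetical.
activity_spend = 12_000   # money spent on activities (GBP)
staff_employed = 3        # number of delivery staff
users = 45                # number of users who took part

cost_per_user = activity_spend / users
users_per_staff = users / staff_employed

print(f"Cost per user: £{cost_per_user:.2f}")           # £266.67
print(f"Staff-to-user ratio: 1:{users_per_staff:.0f}")  # 1:15
```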

 

Outputs: Activities and Users

  • Who was the target group, and was the intended target group reached?
  • What was the size of the target group, and what were their characteristics?
  • What were the activities/content?
  • How many participants were recruited? How successful were recruitment procedures?
  • How many of the target group participated, how many completed and how many dropped out? (See the sketch after this list.)
  • How many sessions were held?
  • How long was an average session?
  • Did staff have the right skillset to deliver the content?
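Reach, completion and drop-out can be expressed as simple rates. A minimal sketch, with hypothetical recruitment and attendance figures:

```python
# Illustrative only: hypothetical recruitment and attendance figures.
target_group_size = 120   # estimated size of the target group
recruited = 60            # participants recruited
completed = 42            # participants who completed

reach = recruited / target_group_size     # share of target group reached
completion_rate = completed / recruited   # share of recruits completing

print(f"Reach: {reach:.0%}")                        # 50%
print(f"Completion rate: {completion_rate:.0%}")    # 70%
print(f"Drop-out rate: {1 - completion_rate:.0%}")  # 30%
```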

 

Outcomes

  • How many improved or made progress/did not improve or make progress?
  • What were the characteristics of the users who made progress?
  • What were the characteristics of the users who did not make progress?
  • What type of progress was made (e.g. skills, learning)?
  • Did users achieving short-term outcomes go on to achieve longer-term outcomes?

Analysing data in relation to outcomes

Analysing Outcomes: Evidence of Change

Outcomes are usually about change. You might be interested in changes in participants’ knowledge, behaviour, needs or attitudes (depending on how your logic model predicted your project would work).

Because you are interested in change, it is not enough simply to observe or measure users after the intervention. Participants might display the desired behaviour or attitudes after your intervention, but you cannot be sure they didn’t already hold these views or behave in this way beforehand.

This is why it is so important that you collect data from the very start of your project. This enables you to compare users’ views, behaviour or knowledge before and after the project, giving you evidence of whether or not change has occurred. For example, you could use a standardised questionnaire to measure users’ self-esteem before, during and after the project.
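A minimal sketch of such a before/after comparison, assuming each user completed the same standardised questionnaire at the start and end of the project (the scores below are hypothetical, and a change score only shows change, not what caused it):

```python
# Hypothetical before/after questionnaire scores for eight users.
from statistics import mean

before = [22, 18, 25, 20, 17, 23, 19, 21]
after = [26, 21, 25, 24, 20, 27, 18, 25]

changes = [a - b for a, b in zip(after, before)]
improved = sum(1 for c in changes if c > 0)

print(f"Mean change: {mean(changes):+.1f} points")     # +2.6 points
print(f"{improved} of {len(changes)} users improved")  # 6 of 8
```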

Limitation! Even when making comparisons in this way, you cannot be sure that your project caused these changes. There may have been other factors (see next three sections). Be honest about these limitations in your reporting.

Assessing your contribution to change

Explaining Outcomes: Assessing Contribution

Given the complexity of the social world, it is very unlikely that any single project can make a difference to people's behaviour on its own. Where change is evidenced in users (whether positive or negative), there are likely to be multiple causes, and your project will only be part of the picture.

Without using a randomised controlled trial (which, as we have said, is often impractical), it is very difficult to measure precisely the contribution of a single project. However, we can get a broad sense of the relative importance of the project and how it might have contributed to change in conjunction with other influences.

There are two key ways of doing this:

1. Subjective views on contribution
2. Identifying potential outside influences

Subjective views on contribution

Users, staff and other stakeholders are valuable sources of evidence for assessing the relative contribution of your project to observed changes in users, in relation to other influences. You can:

  1. Ask users whether they received other forms of support or experienced other influences on their behaviour.
  2. Ask users to rate the extent to which each form of help contributed to their success: for example, was it the project, their family, friends, another intervention or their own desire to succeed? (A sketch of summarising such ratings follows this list.)
  3. Ask others who know the users (e.g. family, teachers, social workers) to rate the relative influence of the project on observed changes.
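A minimal sketch of how such ratings could be summarised, assuming each user rated every influence on a 1 to 5 scale (the influence names and scores are hypothetical):

```python
# Hypothetical 1-5 ratings of how much each influence contributed.
from statistics import mean

ratings = [
    {"project": 4, "family": 5, "friends": 2, "own motivation": 4},
    {"project": 3, "family": 4, "friends": 3, "own motivation": 5},
    {"project": 5, "family": 2, "friends": 2, "own motivation": 4},
]

for influence in ratings[0]:
    average = mean(user[influence] for user in ratings)
    print(f"{influence}: {average:.1f} / 5")
```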

Limitation!
Asking users and staff to judge the influence of a project runs the risk of ‘self-serving bias’. This is the well-established tendency for people to take the credit for success and underplay external factors. One way to limit this tendency is to tell staff, users and other participants that you will be asking others to also assess the contribution of the project. Be honest about this limitation in your evaluation reports.

Identifying potential outside influences

By thinking about other potential influences, outside your project, which might also have affected behaviour change, you can put your own evidence into context.

Having identified potential influences, you may then be able to rule them out, or acknowledge that they did affect your own users.

For example, in relation to the project on young women’s physical activity, potential influences you might consider are:

  • The weather – Unusually good or poor weather might have encouraged participation in the project and/or other kinds of physical activity.
  • Local facilities – The opening or closure of sports and leisure facilities might have encouraged or discouraged physical activity.
  • Economic conditions – Changes in employment or income levels for families could impact on user participation in the project and outside forms of physical activity (even if free – travel costs may impact).

Explaining negative or mixed outcomes

It is extremely unlikely that your data will show that your model worked as predicted for all users. Be honest about this. It is helpful to analyse users with poor outcomes (no change or negative change), as well as those showing positive outcomes. Use the data (and any other relevant information) to consider the questions below; a short sketch of one way to look for such patterns follows the list.

1. Are there any patterns in terms of who shows positive/poor outcomes?
E.g. Are there better outcomes according to gender, age or socio-economic group?
2. Can you explain these patterns through reference to the way the project was carried out?
E.g. Were activities better targeted at particular groups or likely to exclude others?
3. Are there any external factors which explain these patterns?
E.g. Do cultural norms or practical factors mean particular groups were always less likely to engage?
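A minimal sketch of such a pattern check, assuming your outcome records have been collected in a file with hypothetical columns "gender", "age_group" and "improved":

```python
# Hypothetical data: one row per user, with an "improved" True/False column.
import pandas as pd

df = pd.read_csv("outcomes.csv")  # hypothetical file name

# Share of users showing improvement within each subgroup.
for column in ["gender", "age_group"]:
    print(df.groupby(column)["improved"].mean())
```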

Remember! Your project cannot explain everything. You are only ever contributing to change. This is true of both positive and negative outcomes. If your project demonstrates poor outcomes, you should analyse external factors as well as internal processes in order to explain them.

What can you do to improve?

The crucial next step in the evaluation process is to use your explanations of outcomes in order to improve your model.

  • Can you address any issues at the input stage (e.g. issues with staff training or resources)?
  • Should you extend activities which appear to have been successful?
  • Is it best to stop or redesign activities which the data suggests are ineffective?
  • Can you improve the model to better target groups with negative outcomes?
  • Can you do anything to address external factors which have negatively impacted (e.g. provide transport)?

Who needs to know about this?

Don’t keep your evaluations to yourself! They are important sources of evidence for various groups.

  • Funders will usually require an evaluation report in order to assess the contribution of a particular project (and their funding of it) to positive change. Remember, funders will also want to see evidence of a commitment to continual improvement. So be honest about difficulties and clear about future plans. Advice on producing evaluation reports can be found in Appendix 2.
  • Staff should ideally be involved in the production of evaluations (particularly at the stage of explaining outcomes and planning for improvement) and should certainly be informed of their findings. This will ensure everyone has a shared vision of how the project is working and how to improve their practice.
  • Other organisations, particularly those with similar aims, may be able to benefit from your evaluation findings in planning their own projects. Your evaluation contributes to the evidence base which others should review.

 

Judging the worth of an intervention (for funders)

How can the 5 Step Approach help funders to make their decisions?

Assessing an intervention

Funders can use the 5 step approach as a basis for assessing funding proposals for new interventions or deciding whether to provide continuation funding for existing interventions.

For all interventions, we suggest funders ask themselves:

  • Does the project have clear aims and a rationale for achieving these?
  • To what extent is the intervention based on strong and consistent evidence drawn from research studies?
  • Is there a logic model showing clear, evidence-based links between each activity and the outcomes?
  • Does the intervention include appropriate targets, including targets around the number of people who will engage with, participate in and complete the intervention?
  • Have evaluation questions been identified and is a plan in place to collect the necessary data to answer these questions?
  • To what extent did the evaluation show a) that the resources (inputs) have been spent on evidence-based activities, b) that the intended target group was reached, c) that most participants completed the intervention and d) that the anticipated outcomes for users were achieved?
  • Does the evaluation appear honest and realistic (i.e. are areas for improvement identified, as well as strengths and successes)?

For existing interventions, we suggest funders ask themselves:

  • To what extent did the evaluation show a) that the resources (inputs) have been spent on evidence-based activities, b) that activities are clearly described and were delivered as intended, and c) that targets and anticipated outcomes were achieved?
  • Does the evaluation provide learning about ‘why’ the intervention has worked or not worked?
  • Does the evaluation appear honest and realistic (e.g. does it highlight areas for improvement as well as strengths and successes, and does it acknowledge the external factors that may have impacted on any outcomes the intervention achieved)?
     
Potential checklist for behaviour change projects
(Answer each question Yes, No or To some extent, with comments. A sketch of a simple scoring of such answers follows the list.)

  • Are there clear aims and a rationale? Why was the project needed?
  • Was there a clear rationale for the selection of the target group?
  • Is the project content (what they are going to do) described in detail?
  • Is there a thorough assessment of published research evidence?
  • Is this evidence clearly embedded in the design of the project?
  • Are there also evidence-based, or at least logical, links between inputs (costs), activities and short-, medium- and long-term outcomes?
  • Has an appropriate evaluation been carried out?
  • Has the logic model been tested through the collection of relevant data?
  • Did the evaluation show that resources were spent appropriately on activities with users?
  • Is there evidence that activities were carried out, and to a high standard?
  • How many people were eligible? What was the attendance/completion rate?
  • Were the predicted outcomes achieved?
  • Is there a compelling case that the project made a contribution towards achieving outcomes?
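A transparent, consistent scoring of such a checklist (mentioned under Advantages below) would let funders compare projects on the same footing. A minimal, purely illustrative sketch, with hypothetical questions and weights rather than an official scheme:

```python
# Hypothetical checklist answers; the point system is illustrative only.
answers = {
    "Clear aims and rationale": "yes",
    "Evidence embedded in design": "to some extent",
    "Appropriate evaluation carried out": "no",
}

points = {"yes": 2, "to some extent": 1, "no": 0}
score = sum(points[answer] for answer in answers.values())

print(f"Score: {score} / {2 * len(answers)}")  # Score: 3 / 6
```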

Advantages and disadvantages of the 5 step approach

Advantages

Inclusive – interventions of any size should be able to conduct this type of evaluation

Giving credit for an evidence-based approach and a sound model of change can offset problems with conducting ‘gold standard’ impact evaluations

Funders can better assess the quality of proposals for new or existing interventions and make a more informed decision about the types of interventions to fund

A transparent and consistent scoring system would support and enable a process of ‘certification’ (similar to the accreditation of formal programmes), which could raise the quality of interventions and in turn should change behaviour in the long term.

Encourages on-going evaluation and enables continual improvement

Disadvantages

Not everyone is familiar with logic models, embedding evidence or conducting evaluations, so evaluators and funders might need support

It falls short of quantitative, objectively verifiable measures of impact on long-term outcomes

In order for service providers to conduct a robust logic model evaluation, they must have sufficient time for medium-term outcomes to materialise, and short funding cycles may act against this. The approach does, however, allow other aspects of the process to be evidenced sooner: evidence-based practice, a clear logic model, sound implementation of activities and short-term outcomes.

Contact

Email: Catherine Bisset
