The 5 Step Approach to Evaluation: Designing and Evaluating Interventions to Reduce Reoffending

Updated guidance on how to use the 5 Step approach to design and evaluate criminal justice interventions.

Step 5: Evaluate the logic model

Analysing your data to evaluate the project

Once you've collected some or all of your data, you can use it to analyse whether your model is working as predicted. Analysis is not just a case of describing your data; you need to address the following questions:

  1. What does the data tell you?
  2. Why are you seeing these results (it could be because of your activities or external factors)?
  3. What are you going to do about this? How can you improve the outcomes?

NB: Although you should certainly carry out this process at the end of your project, earlier interim analysis and evaluation is also highly valuable for identifying problems and improving your service on an on-going basis.

Who should carry out an evaluation?

Don't automatically assume that outside evaluations will be more helpful or reliable, or that funders will necessarily view them this way.

As the comparison below shows, there are advantages and disadvantages to both outside and internal evaluations. You should consider these carefully before deciding which approach is right for your organisation.

You may also want to consider commissioning outside expertise to support with particular stages of the evaluation (e.g. designing a data collection framework or reviewing existing evidence).

Whatever your decision, remember to budget for either internal evaluation or external expertise in your funding proposals. ESS provide further guidance on budgeting for self-evaluation.

Outside vs. internal evaluation

Self-evaluation by staff member(s)

Advantages:
  • Cheaper
  • 'In house' evaluators should have a clearer idea of your aims and project
  • Personal investment in improving the service
  • Easier to evaluate on an on-going basis and implement improvements continuously

Disadvantages:
  • Staff may lack the skills or time to carry out evaluations
  • Staff may feel pressured to report positive findings
  • May be perceived as less reliable by some funders

Commissioning an outside evaluation

Advantages:
  • Findings may be perceived as more reliable or less biased by some funders and other stakeholders
  • Evaluators are trained in data collection and analysis
  • Offer an 'outsider' perspective

Disadvantages:
  • Outside evaluators are usually brought in at the end of a project, limiting the ability to implement on-going improvements
  • May lack 'insider' knowledge about the project
  • May also feel pressured to report positive findings to those commissioning them

Testing the logic model

Did the intervention work as it should? Look back at your research questions and see what the data tells you about each one. The data (quantitative and qualitative) will tell you whether the service worked as the model predicted. The following are example questions you could answer using the basic monitoring data you collected.


Inputs
  • Which aspects of the service were/were not evidence based?
  • How much money was spent on activities? Was it sufficient?
  • How many staff were employed and at what cost?
  • What was staff/user ratio?
  • What did the staff do?
  • How many staff were trained?
  • What was the training?
  • Were there enough staff to deliver the activities as planned?
  • What other resources were required?


Activities and outputs
  • Who were the target group? Was the intended target group reached?
  • What was the size of the target group/their characteristics?
  • What were their needs?
  • What were the activities/content?
  • How many referral protocols were set up, and with whom? How did they work? Did they work?
  • How many of the target group participated, how many completed and how many dropped out?
  • How many sessions were held?
  • How long was an average session?
  • Did staff have the right skillset to deliver the content?
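Several of the questions above reduce to simple counts and rates. As an illustration only (the figures below are invented, not drawn from any real project), basic monitoring data on participation, completion and drop-out might be summarised like this:

```python
# Illustrative sketch: summarising basic monitoring data.
# All figures are invented for the example.

referred = 60    # target-group members referred to the project
started = 48     # participants who attended at least one session
completed = 33   # participants who finished the programme

dropped_out = started - completed
participation_rate = started / referred
completion_rate = completed / started  # proportion of those who started

print(f"Participated: {started}/{referred} ({participation_rate:.0%})")
print(f"Completed:    {completed}/{started} ({completion_rate:.0%})")
print(f"Dropped out:  {dropped_out}")
```

Rates like these are only a starting point: the analysis questions earlier in this step (why are you seeing these results, and what will you do about them) still have to be answered.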


Outcomes
  • How many users improved or made progress, and how many did not?
  • What were the characteristics of the users who made progress?
  • What were the characteristics of the users who did not make progress?
  • What type of progress was made (e.g. skills, learning)?

Explaining outcomes: Assessing contribution

Given the complexity of the social world, it is very unlikely that any single project can make a difference to people's behaviour on its own. Where change is evidenced in users (both positive and negative), it is likely that there are multiple causes for this and your project will only be a part of this.

Without using a randomised controlled trial (which, as we have said, is often impractical), it is very difficult to measure precisely the impact of a single project on outcomes, especially long-term outcomes such as reoffending. However, we can get a broad sense of the relative importance of the project and how it might have contributed to change, in conjunction with other influences.

There are two key ways of doing this:

  1. Subjective views on contribution
  2. Identifying potential outside influences

Subjective views on contribution

Users, staff and other stakeholders are valuable sources of evidence for assessing the relative contribution of your project to observed changes in users, in relation to other influences. You can:

1) Ask users whether they received other forms of support or experienced other influences on their behaviour.

2) Ask users to rate the extent to which each form of help contributed to their success: for example, did they say it was the project, their family, friends, another intervention or their own desire to succeed?

3) Ask others who know the users (e.g. family, teachers, social workers) to rate the relative influence of the project on observed changes.
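One simple way to combine such ratings is to average each influence's score across respondents. A minimal sketch, where the influences, the 1-5 rating scale and the scores are all invented for illustration:

```python
# Illustrative sketch: averaging users' ratings (1-5) of how much
# each influence contributed to their progress. All data is invented.

ratings = [
    {"project": 4, "family": 5, "own motivation": 3},
    {"project": 3, "family": 4, "own motivation": 5},
    {"project": 5, "family": 2, "own motivation": 4},
]

influences = ratings[0].keys()
averages = {k: sum(r[k] for r in ratings) / len(ratings) for k in influences}

# Report influences from highest to lowest average rating.
for influence, score in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{influence}: {score:.1f}")
```

Bear in mind the self-serving bias discussed below: averages of subjective ratings indicate relative contribution, not proof of impact.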


Asking users and staff to judge the influence of a project runs the risk of 'self-serving bias'. This is the well-established tendency for people to take the credit for success and underplay external factors. One way to limit this tendency is to tell staff, users and other participants that you will be asking others to also assess the contribution of the project. Be honest about this limitation in your evaluation reports.

Identifying potential outside influences

By thinking about other potential influences, outside of your project, which might also have influenced behaviour change, you can put your own evidence into context.

Having identified potential influences, you may then be able to exclude or acknowledge whether they actually influenced your own users.

For example, in relation to a project to improve the family relationships of female ex-prisoners in the community, potential influences you might consider are:

  • Outstanding warrants - If some of the women were re-arrested on outstanding charges, this will have hindered participation.
  • Child protection issues - Concerns around the safety and well-being of children may have prevented practitioners from working with some families.
  • Economic conditions - Changes in the women's income could affect their participation in the project, for example through travel costs.

Explaining negative or mixed outcomes

It is extremely unlikely that your data will show that your model worked as predicted for all users. Be honest about this. It is helpful to analyse users with poor outcomes (no change or negative change), as well as those showing positive outcomes. Use the data (and any other relevant information) to consider:

  1. Are there any patterns in terms of who shows positive/poor outcomes?
    e.g. Are there better outcomes according to gender, age, socio-economic group, offence type?
  2. Can you explain these patterns through reference to the way the project was carried out?
    e.g. Were activities better targeted at particular groups or likely to exclude others?
  3. Are there any external factors which explain these patterns?
    e.g. Do cultural norms or practical factors mean particular groups were always less likely to engage? For example, women not engaging with drug services for fear of losing their children.
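Looking for patterns like these amounts to cross-tabulating outcomes against user characteristics. A minimal sketch using invented records and a single characteristic (the same approach works for age band, socio-economic group or offence type):

```python
from collections import defaultdict

# Illustrative sketch: comparing outcome rates across groups.
# All records are invented for the example.

users = [
    {"gender": "F", "improved": True},
    {"gender": "F", "improved": False},
    {"gender": "F", "improved": True},
    {"gender": "M", "improved": False},
    {"gender": "M", "improved": False},
    {"gender": "M", "improved": True},
]

totals = defaultdict(int)
improved = defaultdict(int)
for u in users:
    totals[u["gender"]] += 1
    improved[u["gender"]] += u["improved"]  # True counts as 1

for group in totals:
    rate = improved[group] / totals[group]
    print(f"{group}: {improved[group]}/{totals[group]} improved ({rate:.0%})")
```

A difference in rates between groups is a prompt for questions 2 and 3 above (project delivery and external factors), not evidence of cause on its own, especially with small numbers of users.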

Remember! Your project cannot explain everything. You are only ever contributing to change. This is true of both positive and negative outcomes. If your project demonstrates poor outcomes, you should analyse external factors as well as internal processes in order to explain them.

What can you do to improve?

The crucial next step in the evaluation process is to use your explanations of outcomes in order to improve your model.

  • Can you address any issues at the input stage (e.g. issues with staff training or resources)?
  • Should you extend activities which appear to have been successful?
  • Is it best to stop or redesign activities which the data suggests are ineffective?
  • Can you improve the model to better target groups with negative outcomes?
  • Can you do anything to address external factors which have had a negative impact (e.g. by providing transport)?

Who needs to know about this?

Don't keep your evaluations to yourself! They are important sources of evidence to various groups.

  • Funders will usually require an evaluation report in order to assess the contribution of a particular project (and their funding of it) to positive change. Remember, funders will also want to see evidence of a commitment to continual improvement. So be honest about difficulties and clear about future plans. Advice on producing evaluation reports can be found in appendix 2.
  • Staff should ideally be involved in the production of evaluations (particularly at the stage of explaining outcomes and planning for improvement) and should certainly be informed of their findings. This will ensure everyone has a shared vision of how the project is working and how to improve their practice.
  • Other organisations, particularly those with similar aims, may be able to benefit from your evaluation findings in planning their own projects. Your evaluation contributes to the evidence base which others should review.

