What Works to Reduce Reoffending: A Summary of the Evidence

This is an updated version of the original review entitled ‘What Works to Reduce Reoffending: A Summary of the Evidence’, published in 2012.


CHAPTER FOUR: CRITICAL ASSESSMENT OF THE 'WHAT WORKS' LITERATURE, AND PROPOSALS FOR FUTURE RESEARCH

Following on from the evidence regarding reducing reoffending described in Chapters Two and Three, this chapter presents a critical assessment of that evidence and suggests some potentially fruitful avenues for future research.

Critical assessment of the 'What Works' literature

Due to research limitations, in the vast majority of cases it is not possible to know whether the effect of reduced reoffending was directly caused by a particular intervention (as explained in Chapter One). The above review of the evidence shows that some criminal justice interventions are associated with reductions in reoffending. This association should not, however, be misinterpreted as causality. The primary reason is that most evaluations of criminal justice interventions, especially in Europe, use, in the best of cases, vaguely defined or loosely comparable comparison groups and, in the worst, no comparison group at all. This lack of robust comparison group designs substantially weakens the internal validity of evaluation findings (i.e. the extent to which we can infer that the effect was caused by the intervention) and raises the possibility that change is the product of selection effects: offenders participating in programmes are likely to differ in important ways from non-participants (for example, they might be more motivated to change), and these characteristics, rather than the intervention, may have made them less likely to reoffend in the first place[489].

It is difficult to generalise results from "gold-standard evaluations" such as randomised controlled trials to everyday criminal justice settings. Even studies that attempt to ameliorate the problem of selection effects outlined above by employing randomly assigned comparison groups (i.e. randomised controlled trials) suffer from other problems, most notably low external validity, which means that results are hard to generalise to other settings. This has led some researchers to conclude that 'gold-standard evaluations' are often the least suitable for informing practice, mainly because they are usually conducted in quite unique conditions (for example, delivered by intensively trained and highly motivated staff) that differ from those that operate in everyday criminal justice settings[490]. This is sometimes known as the "efficacy" versus "effectiveness" debate. Hough (2010) highlights the particular difficulties of transferring a pharmaceutical evaluation model to criminal justice settings[491]. In the former, 'efficacy' demonstrated through randomised controlled trials can reasonably be assumed to translate fairly well into 'effectiveness' when delivered in "real life" health settings. In contrast, generalising from trials of criminal justice interventions is more problematic given the number and complexity of the variables involved. Hough therefore concludes that randomised controlled trials are valuable in demonstrating what can work, but should only be a first step in an evaluation process which must then analyse the mechanisms through which such programmes succeed or fail for different individuals. As McGuire[492] argues, a finding that an intervention worked, based on a well-designed clinical trial, provides little information about whether it will do so when tested in more challenging conditions, such as an overcrowded prison or a hard-pressed social work office, with fewer resources available. Andrews and Bonta[493] reported that the effectiveness of treatment delivered in the real world is about half the effect of the experimental, demonstration programme.

Similarly, Sampson suggests that different processes may operate at 'micro' (e.g. individual) and 'macro' (e.g. society) levels which cannot be accounted for by randomisation[494], and so important factors affecting reoffending may not be testable through randomisation. Moreover, assessing whether a particular programme worked and whether a policy based on a study will work are not the same thing, as turning the results of a study into policy involves a process of implementation[495]. Implementation always involves encountering a number of different contexts and unintended consequences, and working with people who are interdependent and can choose to accept or reject 'treatment'.

Studies focusing on recidivism as an outcome measure may be ill-equipped to measure desistance. McNeill et al. raise questions about the evaluation of 'what works' programmes using a single measure of recidivism[496]. They suggest that different types of evidence are required to explore different facets of community corrections, and that recidivism studies focus too narrowly on a single aspect of rehabilitation programmes[497]. This may be especially true for holistic interventions[498] which, by definition, work on a number of levels at the same time, complicating what is considered 'success'. Methodologically, it has been claimed that recidivism is a poor measure of desistance[499]. More broadly, McNeill et al. contend that interventions are best understood as supporting rather than producing change, and that for change to happen ex/offenders require motivation, capacity (or human capital) and opportunity (social capital)[500]. Interventions, especially those based on RNR principles, focus only on the capacity/human capital element of desistance.

Researchers increasingly advise that evaluations should focus not only on what works, but also on how and why it is expected to work. If even the most robust studies, such as randomised controlled trials, suffer from limitations that preclude safe conclusions about the effectiveness of interventions in everyday criminal justice settings, where does this leave us in terms of using evidence to inform practice development? Acknowledging the limitations of evaluation research designs, researchers are increasingly arguing that instead of focusing narrowly on outcome evaluations to assess "whether" an intervention works, it is equally, if not more, important to examine "how" and "why" it is expected to work and which aspects of it made a difference for offenders[501]. This would include assessing whether the intervention has a robust theory of change, is implemented to best practice standards and is effectively targeted at the right people. To take account of these issues, Justice Analysts in Scotland have produced guidance for funders and service providers on developing and evaluating theories of change using the evidence base and logic models[502].

Directions for future research

Evaluations should incorporate more high quality user feedback on why an intervention worked or not. One of the key messages emerging from the above review of the literature is that desistance from offending is a highly individualised process, and offenders can reach this outcome through a number of different paths. To improve our understanding of how offenders change and, therefore, how criminal justice practitioners can best support and accelerate the desistance process, it is important to incorporate more high quality user feedback into research designs and to gather offenders' views on what helped or hindered them in giving up crime.

More studies investigating the process of desistance are needed in Scotland. There would also be merit in replicating in Scotland desistance studies like those reviewed in Chapter Three. This would ideally involve following up cohorts of offenders to gather evidence on the triggers, facilitators and obstacles for the transition away from crime. This type of research would need to take into account that desistance pathways are likely to differ among sub-populations of offenders (e.g. females, young people), which should therefore be examined separately[503]. In particular, there is a lack of research into female desistance from crime. It may also be useful to further examine the ways in which concepts central to desistance, such as identity, can be measured in practice[504].

Further research is required into the effective implementation of interventions. There can be a large discrepancy between the effectiveness of CBT interventions in demonstration projects and in the field[505]. The reasons for this discrepancy, and the factors affecting sound implementation, are important areas for further research to bridge the gap between theory and practice. This should include work on the implementation of interventions in Scotland, given the distinctive nature of the Scottish justice system.

Further work is required on the impacts of strengths-based programmes. Given the current debate about the relative merits of strengths-based interventions such as GLM in comparison to risk-based interventions based on RNR, further work is necessary to evaluate the impacts of strengths-based programmes in practice. Evaluations which consider outcomes as well as process would be especially useful in informing policy-makers of their respective merits.

Evaluations and desistance research should make increased use of more sophisticated methodologies. Methodologies used to measure desistance need to better reflect that desistance is a complex process rather than a single event. It follows that evaluations need to develop tools that can measure the extent to which users are achieving intermediate outcomes, capturing progress (or lack of progress) over time, and to combine these with other research methods which can highlight factors that either support or inhibit the achievement of outcomes. The wider use of observational research would also help to map the nuances of the desistance journey and the experience of interventions, providing richer data on what helps and what hinders desistance.
