Are we achieving what we set out to achieve?
What is working? For whom and in what contexts?
Learning is key to why we evaluate: it helps to improve current interventions by providing the evidence needed to make better decisions. Evaluation builds a general understanding of what works, for whom and in what contexts, and generates examples for future policy-making. It can also develop evidence to inform future interventions.
The Scottish Government and the policies it delivers should be accountable and transparent. Evidence should be generated that can demonstrate an intervention’s impact or wider outcomes. Evidence of its effectiveness is also needed in response to scrutiny and challenge from public accountability bodies.
Types of evaluation
Evaluations use research methods to look at:
- whether policies have been implemented as intended and whether the design is working; what is working more or less well, and why (process evaluations), or
- whether outcomes have been achieved and how: an objective test of what changes have occurred, the scale of those changes, and an assessment of the extent to which they can be attributed to the intervention (impact evaluations), or
- a combination of process and impact evaluation, or
- a comparison of the benefits and costs of the intervention, to assess whether the benefits of the policy outweighed the costs (economic evaluations)
Evaluability Assessment is a systematic, transparent approach to planning evaluation. It involves structured engagement (usually through collaborative workshops) with stakeholders, experts and policy makers to consider evaluation aims and approaches and to build consensus for a particular evaluation approach. It typically results in a report providing options on how best to proceed with an evaluation.
Previous discussion on evaluation
Evaluation of the work of REAREP was considered at a previous Programme Board meeting (see below). The discussion flagged the importance of collaboration (including with wider partners), of considering long-term impact, and of measuring change in behaviour in schools.
The board agreed on the need to collectively consider how the work of the REAREP will be evaluated. It was suggested that groups such as Young Scot, with their reach across the country, could be utilised. The programme will not have an immediate impact and will be evaluated over a longer period: work on relationships and behaviours in schools requires a long-term approach and is about changing culture and ideas. When evaluating impact, there is a need to be cautious and mindful of long-term change. How change has been measured in respect of behaviour in schools may be a useful example to consider: research on this topic has been undertaken since 2004 and published every four years, providing a picture of how behaviour has changed in schools since the introduction of CfE.
Thinking about an evaluation of the group’s work/impact:
- what is the overall purpose of the evaluation?
- at a high level, what do we need to know?
- when do we need to know it?
Thinking about the evaluation approach and the role of the group:
- do you have any comments on the process (Evaluability Assessment) identified in this paper to work through how best the Race Equality and Anti-Racism in Education programme can be evaluated?
- how should the REAREP group take forward the approach? This could involve sub-groups, the whole group, or a combination.
- would contracting an external facilitator to take forward evaluation in collaboration with the group be beneficial?