Digital appraisal manual for Scotland: guidance

Guidance to help embed best practice appraisal and evaluation within policy making relating to digital projects.


Post-appraisal

Once the preferred intervention proposal is selected using DAMS and implementation is underway, it is necessary to begin the post-appraisal process of monitoring and evaluation. Monitoring and evaluation assesses the performance of the intervention against the appraisal and is an important way of identifying lessons that can be learnt to improve the design and delivery of future digital interventions in Scotland.

It is important to plan for, and take account of, post-appraisal resource and budgetary provisions throughout the appraisal process. Proportionate monitoring and evaluation must be undertaken for all digital proposals.

Monitoring

Monitoring is the collection of data (e.g. management information), both during and after implementation, to improve current and future decision making. Collecting and analysing data during the implementation phase allows changes to be made, where possible, if the proposal is off track in meeting its objectives. Monitoring the costs and benefits during and after implementation is also necessary for management and transparency.

As part of the DAMS report produced during the appraisal stage, a monitoring plan should be developed to outline how monitoring will be undertaken following implementation. This should account for data and information that is already available (such as published data) as well as information that can be procured in future to aid the monitoring (and later evaluation), such as surveys. After identifying the available data and intelligence for use in monitoring, a set of key performance indicators (KPIs) should be developed to help provide ongoing updates on the outcomes of a project. KPIs should be clearly linked to the proposal’s SMART objectives, DAMS criteria and established policy directives.

Examples of KPIs:

  • financial, such as money saved or revenue generated
  • related to digital, e.g. % population access, take-up rates

Effective monitoring requires regular analysis of the information being gathered in order to continuously review the performance of the project against the established objectives, the DAMS criteria and the impacts of the project on established policy directives. Used in this way, monitoring should identify any areas of under-performance and the factors causing it, allowing practitioners to implement appropriate changes at an early stage.
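By way of illustration only, the kind of regular check described above can be expressed as a short script. The KPI names, targets and tolerance in the sketch below are hypothetical assumptions rather than DAMS requirements; the intent is simply to show actual values being compared against targets so that under-performance is flagged early.

```
# Illustrative sketch only: hypothetical KPIs, targets and tolerance.
# Flags under-performance when an actual value falls short of its target
# by more than the chosen tolerance, so corrective action can be considered early.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float
    actual: float

def flag_underperformance(kpis, tolerance=0.05):
    """Return KPIs whose actual value is more than `tolerance` below target."""
    return [k for k in kpis if k.actual < k.target * (1 - tolerance)]

kpis = [
    KPI("Population with access to the service (%)", target=90.0, actual=82.0),
    KPI("Take-up rate (%)", target=60.0, actual=58.5),
    KPI("Annual saving (GBP)", target=250_000, actual=240_000),
]

for k in flag_underperformance(kpis):
    print(f"Under target: {k.name} (actual {k.actual} vs target {k.target})")
```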

The project manager should develop a monitoring report that reflects the monitoring plan proposed as part of the DAMS. The level of effort and expenditure required to monitor a project will vary. A range of factors should be considered when determining the appropriate level of effort and expenditure for a particular project, including the resources available (both time and finances), the scale of the project, its degree of innovation, the degree of risk exposure associated with adverse outcomes, and the quality and robustness of the monitoring outcome.

Evaluation

Evaluation is the methodical assessment of the intervention’s design, implementation and outcomes. The evaluation should assess the short, medium and longer-term outcomes of the intervention and set out how well the intervention met its SMART objectives. It is at the evaluation stage that the benefits of the intervention to society are demonstrated.

The evaluation should test:

  • if or how far an intervention is working or has worked as expected
  • if the costs and benefits were as anticipated
  • whether there were significant unexpected consequences
  • how it was implemented and, if changes were made, why

Monitoring and data collection throughout the lifespan of the intervention are therefore particularly important for evaluation. The DAMS report should include an evaluation plan for the project, covering, for example, the scope, timing and frequency of evaluation. An evaluation is always undertaken against indicators and targets derived from the objectives, DAMS criteria and relevant established policy directives for a particular project. It involves comparisons against a baseline (such as a “do nothing” or “do minimum” baseline) to establish the actual outcomes relative to those that would have happened anyway, or with minimal intervention. It is essential that evaluation is considered at the outset of the business case process, particularly where there is a requirement to collect time-sensitive baseline information (for example, surveying before a policy or intervention comes into effect).
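For illustration only (the notation below is not part of DAMS), this baseline comparison can be written with O denoting a measured or estimated outcome:

\[
\text{Net impact} = O_{\text{with intervention}} - O_{\text{baseline}}
\]

where the baseline term is the outcome estimated under the “do nothing” or “do minimum” option, which is why time-sensitive baseline information needs to be collected before the intervention takes effect.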

Given the relative immaturity of digital as a policy area, thorough evaluation is essential as we develop our understanding of the impact of interventions in this sphere and bolster our evidence base.

Evaluations can look at both the process with which an intervention has been implemented (process evaluation) and the outcomes of the intervention (outcome evaluation).

  • process evaluation is particularly useful in the early stages of implementation, when there is scope for amending a project to make it more efficient or effective. Process evaluation should also highlight issues for the future outcome evaluation, including the extent to which the information being produced by the monitoring process is likely to be adequate for subsequent outcome evaluation
  • outcome evaluation should look for clear and measurable outcomes from the project. The timing of an outcome evaluation needs to be carefully programmed. If undertaken too soon, final impacts may not have had time to ‘work through’; if undertaken too late, resources may continue to be wasted on a project that is not efficient or effective. As with appraisal, the size and level of detail of the evaluation should be proportionate to the size and cost of the project

The evaluation will use data, information and KPIs from the monitoring stage as well as bespoke analysis of the intervention over time. This will likely involve qualitative and quantitative sources, such as relevant data, literature, surveys and econometric analysis. It should also include consultation with key stakeholders.

Any evaluation report should show whether a project represents a good use of resources, whether value for money could be improved, and, if so, how best to achieve this.
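As an illustrative sketch only, value for money is often summarised by comparing the present value of benefits with the present value of costs. The cost and benefit figures below are assumptions made for the example, and the 3.5% discount rate reflects the HM Treasury Green Book central rate rather than a DAMS requirement.

```
# Illustrative sketch only: hypothetical annual cost and benefit streams (GBP).
# Discounts each year's values to present values and reports a benefit-cost
# ratio; a ratio above 1 indicates discounted benefits exceed discounted costs.
def present_value(stream, rate=0.035):
    """Discount a list of annual values (year 0 first) at the given rate."""
    return sum(value / (1 + rate) ** year for year, value in enumerate(stream))

costs = [500_000, 120_000, 120_000, 120_000, 120_000]
benefits = [0, 200_000, 300_000, 350_000, 350_000]

pv_costs = present_value(costs)
pv_benefits = present_value(benefits)
bcr = pv_benefits / pv_costs

print(f"PV costs: {pv_costs:,.0f}  PV benefits: {pv_benefits:,.0f}  BCR: {bcr:.2f}")
```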

For more detail and guidance on monitoring and evaluation, see the UK Government’s Magenta Book, which sets out best practice in these areas.

Feedback

Post-evaluation, it is necessary to feed back on the process itself, as well as on any evidence gathered as part of the process. This can be used to refine the DAMS process itself, contribute to the development of the technical database and inform future policy making.

Please contact DAMS@gov.scot with any feedback or questions.

Contact

DAMS@gov.scot
