Child protection learning and development 2024: national framework

Framework to support multi-agency learning and development relevant to child protection in Scotland. This highlights key learning for all workforces according to the level of responsibility they have for child protection.

Appendix C: Evaluation

1. REACTION - Have learners found the learning activity relevant, engaging and useful?

2. LEARNING - Have learners acquired the knowledge, skills and confidence the learning activity is focused on?

3. BEHAVIOUR - Are learners applying learning as they do their job?

4. RESULTS - Have targeted outcomes from learning activities been met?

The Kirkpatrick Evaluation Model identifies four stages of evaluation activity (listed above). The stages are progressive, moving from a focus on the specific learning activity to the impact of the activity on practitioner performance and wider outcomes. Each successive stage requires different kinds of evaluation activity and more rigorous and potentially time-consuming analysis, so learning and development practitioners need to consider the purpose and depth of evaluation required.

Full implementation of the Kirkpatrick Model is best suited to in-depth analysis of the effectiveness of learning, for example as part of a Learning Needs Analysis.


Stage 1: REACTION

This stage is about measuring learners' responses to a particular learning activity. Reactions are usually captured through some kind of survey immediately following a training session or use of other learning materials. Depending on the setting, this could be a paper evaluation form or an online survey.

Paper evaluations can be useful for evaluating classroom-based training sessions. They tend to generate the most responses, as the trainer can encourage all attendees to complete the form before leaving the training venue. For analytical purposes, individual responses will need to be manually collated, which takes time and risks inaccuracies in the transfer of data.

Online forms can be useful in both classroom-based and online training sessions, and for asynchronous learning activities. An advantage of online forms is the automatic collation of responses, preserving data integrity, and (in some applications, e.g. MS Forms) the real-time production of graphs and other analytical information. Response rates vary: they tend to be higher when the survey is completed immediately, but lower when it is sent after the learning event, as completion becomes a lower priority for learners.

Whichever format is used, even a short Reaction Evaluation can provide a rich source of quantitative (Likert scale and multiple-choice questions) and qualitative (free text) information.

  • Likert scale and multiple-choice questions provide aggregate information, easily presented in visual form, which helps identify patterns and trends in learner reactions
  • free text responses provide more detailed information about learner experience and where improvements can be made.
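As a purely illustrative sketch (the framework does not prescribe any tooling), aggregating Likert-scale responses for visual presentation can be as simple as computing a mean and a rating distribution; the question wording and response data below are hypothetical:

```python
from collections import Counter

# Hypothetical responses (1 = not at all, 5 = completely) to the question:
# "How relevant was this learning activity to you in your role?"
responses = [4, 5, 3, 4, 5, 2, 4, 5, 5, 3]

# Aggregate statistics: mean score and count of each rating
mean_score = sum(responses) / len(responses)
distribution = Counter(responses)

print(f"Mean relevance score: {mean_score:.1f} out of 5")
# → Mean relevance score: 4.0 out of 5
for rating in range(1, 6):
    print(f"  {rating}: {distribution.get(rating, 0)} response(s)")
```

Survey tools such as MS Forms produce this kind of aggregation automatically; a manual calculation is only likely to be needed when collating paper forms into a spreadsheet or script.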

Who the evaluation information is for, and the use they will make of it, will determine the exact questions asked, but it is helpful to consider three themes:

  • Relevance: to what extent did learners find the subject matter and level of material relevant to their work? Questions under this theme could include: specific learning outcomes; general questions about the relevance to the participant; participant application to practice. For example:
    • on a scale of 1-5, where 1 is not at all, and 5 is completely, how relevant was [this learning activity] to you in your role? (Likert Scale)
    • to what extent did [this learning activity] meet [Learning Objective 1]? Not at all, Partially, Completely. (multiple choice options)
    • how will you use what you have learned in your day-to-day work? (free text input)
  • Engagement: to what extent did learners feel actively involved with the learning activity? Questions under this theme could include: views on the quality of materials, number and nature of activities, extent to which learners felt challenged by the material, or accessibility of the learning activity. For example:
    • was there a good balance of listening and activities? Yes; Too many activities; Too much listening. (multiple choice options)
    • on a scale of 1-5, where 1 is not at all, and 5 is completely, how easy was it for you to [read/attend/use] [this learning activity]? (Likert Scale)
    • which aspects of [this learning activity] resonated with you? (free text)
  • Satisfaction: to what extent are learners happy with their learning experience? Questions for this theme can be designed in the same way as those for Relevance and Engagement.


Stage 2: LEARNING

This stage begins to explore the extent to which learners acquired and retained the knowledge, understanding, skills and values that the learning activity was focused on. Examples of how this might be done through a follow-up survey are below:

  • Follow up electronic surveys/self-report questionnaire options:
    • 'Give an example of how you have applied the learning from [learning activity/course].'
    • 'What 3 things do you most remember from this course?'
    • 'What has been the most relevant for your knowledge or skills?'
    • 'How much did this learning increase your knowledge or skills?'
  • as above, but followed up through an individual call or small group practice development forum
  • manager or supervisor feedback

Stage 3: BEHAVIOUR (transfer of learning to practice)

This stage explores the changes to learners' practice following completion of the learning activity, exploring whether people are applying what they have learned to their work. The results of evaluations at this stage will provide information on whether training has been understood by learners, seen as appropriate to their role, and has had an impact on how they practise.

To enable an effective measure of the impact of learning activities on learners' behaviour, this type of evaluation should be conducted 3-6 months after completion of the learning activity, giving learners time to embed changes in practice and behaviour.

This stage may represent the truest assessment of the effectiveness of any learning activity or programme. However, there are many personal, structural and organisational factors that influence learners' ability to transfer learning to practice. When evaluating at this stage, it is important to consider what other factors may have facilitated or created barriers to change.

Effective ways to measure changes in practitioner behaviours include:

  • practitioner interviews or peer group discussions
  • case file audits
  • manager/supervisor feedback
  • observation and reflective supervision

Stage 4: RESULTS

This evaluation stage provides information on whether the learning activity has resulted in improvement in targeted outcomes. It will take time to plan, gather information and analyse results. Considerations for planning include:

  • clearly identified targets – what result do we want from this learning activity, and how will we measure change? For example: an increase in the use of multi-agency chronologies; more children reporting that they understand their child protection plan; more parents reporting that they felt included/respected at CPPM
  • evaluation at this stage is more easily achieved through quantifiable results – e.g. key performance indicators (KPIs) or learning outcomes
  • sources of evidence, such as:
    • audits
    • surveys
    • supervision/appraisals

Example: using Kirkpatrick's Evaluation Model to determine the effectiveness of a training programme

Before learning activity – establish a "baseline" through:

  • asking learners to rate their knowledge, understanding and skill on the booking form
  • undertaking a casefile audit and/or identifying relevant performance indicators

STAGE 1: REACTIONS – immediately following learning activity:

  • ask learners to complete an evaluation feedback survey
  • ask workers and managers to reflect on their learning through supervision
  • use observation/supervision of the learning provider to reflect on training content and quality

STAGE 2: LEARNING – 3 months after learning activity:

  • use a follow-up survey to ask participants to reflect on how well the learning met their practice needs and to rate their knowledge, understanding and skill
  • ask workers and managers to reflect on their learning through supervision and provide feedback
  • compare the results of the above to Stage 1/baseline information

STAGE 3: BEHAVIOUR – 3-6 months after learning activity:

  • ask participants for examples of how they have transferred learning to practice, through follow-up surveys and individual practitioner interviews or focus groups
  • gather reflections from managers/supervisors
  • seek the views of service users on any changes to service provision

STAGE 4: RESULTS – 6 months or more after learning activity:

  • evidence changes to baseline information through performance indicators, casefile audits etc.
  • analyse all information gathered at Stages 1-3
