Domestic homicide reviews: identifying best practice in learning lessons and implementing change

This working paper outlines 15 aspects of good practice to be considered in the development of a domestic homicide and suicide review model for Scotland. It identifies existing challenges with implementing recommendations from reviews and considers how to define and measure success and impact.


3. Best Practice Considerations for Learning Lessons from Domestic Homicide Reviews

3.1 Reviews and reports

1. Reviews should go beyond a simple description of the case and a timeline of events, instead providing an in-depth analytical and reflective account of why certain decisions were made and actions taken (or not), what was done well, and where opportunities may have been missed.

A good quality review will seek to go beyond a simple description of the case that addresses what has happened, and instead produce an in-depth and ethnographic review of why events unfolded the way they did (Devaney, 2023). Review teams should assess domestic abuse related deaths as the dynamic and complex incidents that they are, and should seek to track these cases as they have moved through systems, identifying and attempting to understand decisions that have been made and where there may have been missed opportunities for intervention (Websdale, 2020).

Taking this approach means understanding the professionals who have interacted with the case as part of broader systems, whose decisions are therefore guided by certain rules, policies, and practices, but also by the less visible mechanisms of organisational cultures and resource constraints (Websdale, 2020). This means that the review cannot focus solely on individual decision making. Rather than using hindsight to construct a linear narrative of how events unfolded, or to characterise a missed opportunity as a straightforward individual or organisational failing, a more in-depth examination which attempts to understand the factors that influenced decision-making at the time will likely produce more effective analysis and create better opportunities for learning (Devaney, 2023).

It may be beneficial for review teams to have the opportunity to conduct this in-depth analysis through independent examination of the relevant materials of the case, rather than relying solely on agency accounts and summaries. This would differ from the process in England and Wales, where members of statutory agencies complete Individual Management Reviews detailing their involvement with the victim and/or perpetrator (Home Office, 2016). Although the English model includes guidance designed to ensure rigorous analysis can still be carried out (such as creating quality assurance procedures and ruling that the review panel should not solely be made up of those responsible for writing agency summaries) (Home Office, 2016), this approach has been criticised as a model of “marking your own homework” (Centre for Women’s Justice and Imkaan, 2023, p.73). An alternative model is the approach taken in New Zealand, where the panel chair and lead co-ordinator review materials themselves, allowing for independent analysis (Tolmie et al., 2017).

As well as highlighting missed opportunities for intervention, an analytical account of a case should also consider where there have been examples of good practice and effective working. Learning reviews often suffer from a negativity bias, where only failings are highlighted in a review; this can contribute to defensiveness and a blame culture, and also limits opportunities to learn from positive examples (Chantler et al., 2019; Boughton, 2021). Analysing effective practice alongside acknowledging what has gone wrong features in the guidance on both adult protection and child protection reviews in Scotland (Scottish Government, 2021; 2022).

2. Reviews should be established from the outset as an opportunity for collective learning and improvement, with commitment to the process and open and productive discussions being encouraged in pursuit of this aim.

The effectiveness and success of a review is highly dependent on the investment and participation of those involved, both in the conduct of the review and in the progression of the learning produced. It is crucial, therefore, that the purpose of conducting the learning review is communicated clearly to convey its importance and mutual benefits, and that the culture of the process is such that discussions can be open and honest and can therefore facilitate in-depth and effective learning.

A key element in encouraging open and honest discussion of a case is to ensure organisations are aware that the review is not about apportioning blame for the death. If agencies are concerned about being found to be at fault in some way, this can create defensiveness and an unwillingness to fully engage with the process, which can stifle attempts to understand the full picture of what happened (Haines-Delmont et al., 2022). Inquiries should therefore be taken forward in the pursuit of accountability, not blame – encouraging agencies to be open and honest while acknowledging the importance of recognising failings and identifying where improvements need to be made (Sheehy, 2017; Websdale, 2020; Haines-Delmont et al., 2022). Frontline staff in particular may feel vulnerable to blame, so there should be support for them throughout the review process, as well as transparency about how other investigations or disciplinary processes (where relevant) may operate alongside the learning review.

Review teams should include a diversity of viewpoints and areas of expertise so that cases are looked at through a wide-angle lens, and these teams should approach the review process as an exploratory investigation and an opportunity for in-depth reflection (Websdale, 2020). Panel members should be encouraged to play devil’s advocate and ask questions which may be challenging. The process should be aimed towards producing a meaningful deep dive of the case and discovering the most effective learning to be gained, rather than being a procedural tick-box exercise which seeks to align its findings with predetermined conclusions and assumptions.

Reviews should be established as having a clear and coherent purpose from the outset, so that review team members have direction in their task and can understand the value of engaging with the process (Boughton, 2021). It should be emphasised to those involved in the process and in taking the learning forward that this is an opportunity to improve practice and to make people's lives better, and that they are instrumental in creating that change. If participants feel united around this common goal, and feel that their knowledge, expertise, and actions are valued as key contributions, this will increase engagement and investment in the process and create a better environment for more effective learning (Rowlands, 2023).

3. Reviews should ensure that review chairs and participants are trained and supported on an ongoing basis to fulfil their role effectively and to participate fully in the review process.

A strong and skilled chair is emphasised as essential to the effectiveness of a review, in being able to manage a complex and difficult dialogue between stakeholders and to obtain information from agencies without causing defensiveness (Haines-Delmont et al., 2022). A chair needs to be able to work through barriers and conflicts that can disrupt productive dialogue between organisations and to create a space where participants can work together effectively (Rowlands, 2023). Being a skilled chair also means being able to engage effectively with families in a way that is both person centred and trauma informed, and that reflects the sensitivity and complexity of domestic abuse.

With this role being so essential to the success of the review process, it is important for the model to consider the selection and ongoing training of chairs, and how this ought to be done to maximise their effectiveness (and therefore the effectiveness of the reviews). It has been suggested by Boughton (2021), for example, that the chair position should be nationally recognised and accredited, that all chairs should receive mandatory training on their role, and that an accessible online resource should be created for chairs to share best practice. In Northern Ireland, three independent chairs have been appointed by the Department of Justice and will initially remain in post for three years, with reviews allocated to them on a rota basis (Department of Justice (Northern Ireland), 2022). This creates a small, consistent pool of high-quality chairs to undertake reviews.

It is also important that the other review panel members are continuously trained and supported to fulfil their roles effectively. In England and Wales, stakeholders have reported having limited access to training on how to participate in a review process, instead often learning as they go (Rowlands, 2023). This may limit the capabilities and confidence of panel members to engage effectively with a review and to generate quality analysis, reflections, and learning. Review teams should therefore be supported to fulfil their roles effectively, and provided with clear guidance on how reviews should be carried out.

It is also key that review participants are given adequate support and resources to carry out the analytical work required, and that this fits within their existing regular workloads. This could involve, for example, a buddy-style system with two staff members of varying experience and seniority participating in the review together in order to share the workload (Boughton, 2021). Involving managers alongside frontline practitioners is already advised for adult and child protection reviews in Scotland, both to ensure that there is opportunity for analysis from those with differing perspectives, and to generate immediate learning that can be taken back into practice (Scottish Government, 2021; 2022).

4. Reports should follow template guidance to ensure consistency in quality and information gathered, and should then be held in a national repository, to facilitate comparison and disseminate learning.

While reports need to be flexible to reflect the different dynamics in different types of cases, there should be a certain amount of consistency in review outputs, supported by clear guidance. The lack of a consistent report template or format can make it difficult to determine why certain pieces of information are missing from a report (Stanley et al., 2018), can create inconsistencies in the quality and detail of reports (Robinson et al., 2018), and can make comparisons across cases difficult (Devaney, 2023). Guidance on how reports should be written – such as the information that should be collected, the structure of the report, and the depth of analysis required – should therefore be available, while still reflecting the differences inherent in reviewing different types of cases. To further facilitate comparison across cases and create clearer opportunities for learning, reports should also be made available in a national repository (Robinson et al., 2018; Rowlands, 2020; Haines-Delmont et al., 2022). This enables learning to be co-ordinated nationally, ensuring that lessons arising from each case are accessible beyond the local region.

5. The Review Model should establish a quality assurance mechanism to review reports prior to sign-off and publication, and to provide feedback for future review conduct.

To ensure that reports are following appropriate guidance and producing quality analysis and recommendations, a quality assurance mechanism should be established to review reports prior to their sign-off and publication, with the power to require reports that fall short of standards to be resubmitted. An analysis of 124 domestic homicide reviews in England and Wales included 36 reports which required resubmission, with the most common reasons being: not following report templates, and therefore omitting required information; insufficient depth of analysis; typographical and grammatical errors; the need for further anonymisation; and a lack of evidence to support statements (Potter, 2021). This suggests that Scotland's model would benefit from having a quality assurance process in place to ensure reports produced follow appropriate guidance. This would provide an accountability mechanism for review teams to ensure they generate quality review outputs that clearly articulate key learning points and information.

However, it is important that this is an open, transparent, and consistent mechanism which makes its guidance and reasons for requesting resubmission clear, and that it is not used to dilute or censor findings that a review panel has made. Additionally, while the quality assurance process would primarily focus on the quality of the reports, it is important that this focus is not a narrow one which directs resources solely towards the process that has already taken place rather than looking forward. The quality assurance process should – in addition to requesting necessary changes to reports prior to sign-off and publication – be a mechanism for providing feedback and improvement opportunities for future reviews, enabling review teams to evaluate their processes and outputs and continuously improve their practice.

3.2 Recommendations from reviews

6. Recommendations should be produced from engagement with all relevant stakeholders to ensure feasibility and maximise effectiveness.

Rather than the review team making recommendations to agencies based on what they assume will work best and be most effective, recommendations should be created through thorough and active engagement with stakeholders who understand how their organisations work in practice, and who will therefore be able to help craft recommendations that are feasible and workable. This should not mean that agency representatives are able to dilute the review's recommendations or avoid implementing changes that may be complex but would be possible. However, stakeholder engagement should facilitate the generation of meaningful recommendations that are feasible within the structures and processes of the organisations, and increase agency investment in the process, making it more likely that improvements will be implemented. New Zealand's process, for example, includes extensive stakeholder engagement when creating recommendations, to ensure that recommendations are practical and to encourage agency buy-in (Tolmie et al., 2017).

7. Recommendations should be CLEAR (case for change, learning orientated, evidence based, assign responsibility, review).

It is crucial that the recommendations that come out of reviews are constructed in a way that maximises their effectiveness and puts the organisations responsible for their implementation into a good position to take the changes forward. This means creating recommendations which clearly outline specific, measurable, and tangible actions to be taken, rather than vaguely worded suggestions which could be easily dismissed or misunderstood, and therefore have no hope of implementation (Haines-Delmont et al., 2022). Guidance for domestic homicide review recommendations in England and Wales is that they are SMART (specific, measurable, achievable, relevant, and time-bound) (Potter, 2021), while guidance for recommendations produced from adult and child protection learning reviews in Scotland is that they are CLEAR (case for change, learning orientated, evidence based, assign responsibility, review) (Scottish Government, 2021; 2022). Using this same CLEAR formulation for recommendations from Scotland's domestic homicide and suicide review model would ensure that Scottish learning review guidance is aligned and consistent across different processes. Recommendations should be justified with a clear rationale and demonstrably drawn from evidence in the review (Scott et al., 2022; Devaney, 2023), and, where possible, should signpost information on good practice elsewhere (Scott et al., 2022). Doing this positions recommendations as achievable and as serving a purpose – therefore increasing agency investment in carrying them out and preventing the perception that reviews are merely procedural exercises. Recommendations should be translated into a clear action plan, which can then be monitored and audited to track implementation and outcomes.

8. Reviews should focus on creating a small number of meaningful, key recommendations to be generated into an action plan, and avoid over-burdening agencies with an excess of superficial or unrealistic recommendations.

Similarly geared towards ensuring suggested changes from reports are feasible, reviews should also focus on creating a small number of recommendations that will generate the most meaningful impact within the current and future (where known) capabilities of the system or organisation. This will be more effective in creating genuine change over time than overwhelming agencies with a long list of recommendations that are either superficial or idealistic. Reviews should suggest a small number of key improvements based on an understanding of how the current system is operating, including existing difficulties and constraints, and therefore identifying where effective change could be made in practice, rather than creating recommendations based on an idealised version of how organisations ought to be able to operate (Devaney, 2023). This principle should not prevent reviews from envisioning long-term change and more significant impacts – the importance of targeting wider social and cultural transformation is discussed in the next section. However, when it comes to creating key recommendations and action plans for agencies to take forward, these should focus on generating the most meaningful improvements that are within organisational capabilities and building on these changes over time – aiming for quality over quantity. Recommendations should be created with impact in mind, considering what it would mean for the recommendations to be successful and identifying how the desired improvements may be measured.

9. Recommendations should encompass local and national actions as well as system-level changes, placing the case within a wider context and connecting it to previous reviews, which will create a systems-focused approach to learning and improvement.

To ensure suggested improvements are comprehensive and effective, reviews should generate recommendations for both the local and national level. This, however, must be done in a way that ensures ownership is taken of national learning and not just local learning – for example, there are existing problems in England and Wales with national bodies not being made aware of learning gained from reviews, and there being limited accountability for this learning to be enacted (Boughton, 2021). It is important that this is addressed (effective dissemination of learning is discussed in a later point), and that reviews are able to cover both the local and the national picture. This is partly to reflect how the two are interconnected, with national matters affecting the operation of local agencies and the outcomes of local cases. It is also to ensure that learning identified in a specific region (which is likely to be relevant elsewhere) is shared nationally, so that the lessons are not confined to that area alone. This should include learning from best practice.

As well as recommendations targeted at local and national bodies, reviews should also seek to make recommendations targeted more widely and systemically, in terms of transforming the way that domestic and family violence is considered and handled at a social and institutional level (Tolmie et al., 2017). The review needs to put forward a vision of how domestic abuse could be better conceptualised, identified, and tackled, and communicate how individually targeted recommendations and actions contribute to a broader theme, or theory, of change. This may be accomplished through formulating short-, medium-, and long-term recommendations (and outcomes), to balance taking immediate feasible actions alongside envisioning how those actions contribute to a longer-term goal. Recommendations also need to take a holistic approach to change across an institution and its organisational management – they should not be focused solely on frontline practitioner behaviour, but rather targeted at a higher, structural level, with the aim that the agency culture can evolve to support changes in frontline work.

While a domestic homicide and suicide review is in part about memorialising and giving a voice to the victim, ensuring that their story is told, it is important that the review reaches beyond the individual case being examined and sets its analysis within a wider systemic context. This is to maximise the opportunity for overall, systems-level improvement – extrapolating the key opportunities for learning from an individual case to reflect on what needs to change on a broader scale. New Zealand takes this wider-lens approach, focusing not on the specific case but on the wider system response, and orientating its analysis and recommendations towards what will prompt transformational change (Tolmie et al., 2017). Rather than continuing to make recommendations to individual organisations, New Zealand's process has evolved into proposing systemic improvements and changes in conceptual thinking about family violence, seeking to reconstruct the system which addresses this violence rather than making changes to individual parts of what exists currently (Tolmie et al., 2017). This approach provides opportunities for more ambitious and effective learning, rather than limiting learning to specific problems that have arisen in individual cases. It is important that individual cases are considered as part of a broader context, and that attempts are made to reach across existing and previous reviews to bring the learning together.

10. Reviewers should be mindful of recommendations made in previous reviews, and consider whether making the same recommendation is warranted or whether to refer to the previous review's recommendation as still being relevant.

The effectiveness of other models has been limited by the lack of monitoring of recommendations from completed reviews, leading to reports identifying the same problems and therefore offering the same recommendations (Devaney, 2023). Making recommendations that have already been made in previous reviews may contribute to overwhelming agencies with long lists of superficial recommendations, as well as failing to address why the recommendations have not previously been implemented. If recommendations have not been implemented, or if they have been implemented but have not produced the desired changes, inquiries should be made as to why this is, and consideration given to whether those recommendations need to be rethought. This could be a key role for an oversight or quality assurance body to which reports are submitted, which would be able to assess reviews as a whole and keep track of recommendations and their impacts over time.

Repeating the same recommendations is not conducive to the goal of viewing reviews as interconnected processes, in which each report should consider the context and outcomes of previous reviews. The model could therefore follow the example set by Ontario’s system, where reports – if identifying issues or recommendations already highlighted in previous reviews – may re-record recommendations for information purposes only, or simply note ‘no new recommendations’ (Scottish Government, 2023). It is important that this is done in a way which acknowledges that the previous recommendation is still relevant to the new case, and seeks to identify and address why this is.

3.3 Monitoring and evaluation

11. Reviews should establish an accountability mechanism, where agencies are required to provide a detailed progress update on the effective implementation of recommendations.

Few models feature a mechanism for follow-up once reviews have been completed and recommendations made, leading to an inability to assess the effectiveness of reviews and whether the recommendations have led to the desired improvements. This lack of monitoring focuses the process on the conduct of the review itself rather than on the system changes that should follow from it (Devaney, 2023). Some models, however, such as those in New Zealand and Ontario, have a response regime in place, which requests updates from agencies on the status of recommendations and what steps have been taken to action them (Scottish Government, 2023). This ensures the process extends beyond simply making the recommendations to overseeing their implementation, and also provides an opportunity to identify problems that may be preventing or hindering agencies from acting on recommendations and/or achieving the desired change. This could then contribute to subsequent reviews by furthering understanding of recommendations that have been made previously and why they may have been unsuccessful.

In practice, response regimes can be superficial – Sheehy (2017) notes, for example, that while the Ontario system requires responses from organisations within six months of the recommendation being made, these status updates are self-evaluated, and are not challenged or questioned. It is important therefore that the process in Scotland is rigorous and that organisational updates are not the sole mechanism of accountability and monitoring. The Case Management Review process in Northern Ireland, for example, requires agencies responsible for implementing actions to report on progress on a quarterly basis, but this mechanism is reinforced with oversight from the Safeguarding Panel which monitors the implementation of action plans (Safeguarding Board for Northern Ireland, n.d.).

12. Reviews should establish an overarching body to monitor the implementation of recommendations and the ongoing effectiveness of reviews, which will provide regular progress reports on key themes, actions that have been taken, and impacts that have been achieved.

Establishing an oversight body or function which would be responsible for following up with agencies and taking the learning forward from reviews would create a more robust monitoring system than relying on self-evaluated responses from agencies alone. This body or organisation would be responsible for taking ownership of the reviews, collating reports, and synthesising and disseminating key learning (Robinson et al., 2018). In addition to monitoring learning and recommendations from in-depth reviews of cases, this body could also generate aggregate data from reviews, looking across cases to identify patterns and monitor national statistics (e.g. homicide statistics). Responses to the Scottish Government's targeted engagement consultation were mixed on where reports should be held and who should be responsible for oversight, though the Scottish Government featured in most responses, either as the organisation responsible or as involved through working with other bodies (Kurdi, 2023).

In Wales, where a Single Unified Safeguarding Review process has been introduced, a co-ordination hub has been established to ensure recommendations and action plans following from reviews are taken forward, as well as to publish thematic reports and briefings to enable wider learning (Welsh Government, 2023). Establishing this kind of oversight function is an essential part of keeping the focus on what happens post-review, and ensures that there is an organised and co-ordinated effort to make the learning gained meaningful. Additionally, having an oversight mechanism which is external to and independent of the review panel means that there is a central hub for continuous learning that can carry on beyond the review taking place and beyond the involvement of individual panel or organisation members – this body can help to keep the process continuous and embedded in institutional memories. This is helpful for establishing a longer-term evaluation of changes implemented from reviews, in terms of whether proposed recommendations that have been enacted are working effectively, and whether the review process itself needs to be re-assessed and changed.

13. Reviews should explore creative and effective ways to disseminate the learning produced from reviews to ensure lessons are taken forward and embedded into organisational practice.

It is important that creative and effective methods to disseminate the learning from reviews are explored as part of the development of the model, to ensure that lessons can be embedded into organisational practice. The reports themselves, while important, are often dense and lengthy, and are not the most effective tools to ensure learning reaches where it will make the most impact on practice. Alternative methods of conveying key information therefore need to be considered. This could include, for example, webinars (Robinson et al., 2018), infographics, briefings, and regular, reflective, in-person learning events designed to consider key findings and share reflections. These methods of sharing information should also be designed with the media and public awareness in mind.

This dissemination of learning should encompass not only lessons and recommendations from an individual review, but also broader learning gained from reviews over time, in line with the approach of viewing individual cases in a wider context and in relation to each other. Opportunities for regular reflection should be created, where those involved in review processes and those with lived experience of relevant services (such as victims' families) can engage with key findings. This would provide an opportunity to consider whether and how experiences of engagement with services have improved as a result, as well as what more needs to be done.

Effective dissemination of learning again requires national co-ordination of reviews and review teams to prevent reflections remaining isolated to the regions and practitioners involved in a specific case. Dissemination of learning should also be considered a key part of the role of panel members, who could take responsibility for presenting findings and suggested improvements back to their respective fields. They will be in a position of understanding how the organisation works and therefore how to share the learning effectively, as well as being able to take advantage of their contacts and credibility within the profession. This should be considered part of the role and responsibility of a panel member, with adequate time and resources allowed for panel members to undertake this important post-review work.

14. Reviews should evaluate the success of the model on a variety of measures, as well as recognising the value inherent in simply conducting the process effectively.

The measure of success of the model cannot solely be a reduction in the number of deaths, due to the impossibility of determining a causal relationship between the review process and any such reduction, which would set the review process up for failure when success cannot be established (Bugeja et al., 2015). Despite this, there is inherent symbolic value in setting out to reflect on cases in order to reduce deaths, in positioning domestic abuse as something that is preventable and will not be tolerated by society (Bugeja et al., 2015; Dawson, 2021).

While recognising this symbolic value of the goal to reduce deaths, and how this goal may be beneficial for increasing agency investment in the process, it is important to establish alternative measures of success for reviews, and to consider these when reflecting on and relaying the purpose of conducting a review. The process should be underpinned by a clear Theory of Change which articulates the purpose of the review, the desired changes and improvements, and the actions that will be taken to achieve them (Rowlands, 2020). The recommendations that follow from reviews should be aligned with the key principles, goals, and purpose of conducting the review, and it should be clear how the impact of recommendations will be monitored and measured to assess whether they have been effective in achieving their purpose.

A key measure of success, therefore, would be whether there have been improvements in service responses and in people's experiences of interacting with the system (Sheehy, 2017; Rowlands, 2020). Developing a domestic homicide review model for Scotland was identified in the Equally Safe 2017 delivery plan as a key action to improve the system response and to support those affected by violence and abuse sensitively, efficiently, and effectively (Scottish Government, 2017). If an oversight mechanism can establish through auditing and monitoring that changes have been implemented, and that there is evidence that these changes are improving practice, it could then be said that the review process has been successful. Evaluating the success of reviews is, however, an underdeveloped area of practice.

There are different ways of measuring the impact of recommendations that could be explored and further researched. There could be engagement with professionals to gain perspectives on their practice following a review, and to measure their knowledge, attitudes, and reflections on practice, and whether they feel empowered to make a difference. Engagement with victim-survivors and families of victims would also build a better understanding of the impact of review recommendations and whether associated changes in practice have improved service users' experiences.

Changes in public awareness and attitudes could be considered another potential measure of success – for example, whether the recommendations and associated actions have made an impact on how the public perceives the key issues and considers what role they might play in combatting domestic abuse. Considering domestic homicide and suicide as the extreme end of a wider continuum of gender-based violence (GBV), we might also want to consider whether there has been a reduction in other forms of GBV as another outcome that learning reviews can contribute to.

Beyond these outcome-focused measures of success, it could be argued that there is success inherent in simply conducting the process itself well, in that it provides an opportunity for dialogue and co-operation between agencies in an effort to improve practice and multi-agency working, as well as prompting practitioners to reflect on their past and future conduct. Guidance for adult support and protection and child protection reviews in Scotland states that there should be a 'thread of learning' throughout the review process, where learning not only comes from the published report but is also produced and developed through dialogue within the review team (Scottish Government, 2021; 2022). It is important, then, that the review process itself is continuously evaluated, which could be done through engagement with review team members, practitioners, and victims' families, who would be able to provide their perspectives on the value and effectiveness of the process and the recommendations that are developed from it.

15. The review model should treat domestic homicide and suicide reviews as a continuous process of reviewing, monitoring, and evaluating, with the report itself being only the first step towards organisational and system change.

It is crucial that the process of producing a domestic homicide and suicide review report is recognised as only the first step towards creating changes and improvements within the system. The process as a whole should be viewed as a continuously evolving practice of reviewing, monitoring, and evaluating (Scottish Government, 2023). The effectiveness of other models has been limited where there has been a lack of long-term oversight, and a focus only on the reviews themselves rather than on national monitoring and auditing of their impacts (Haines-Delmont et al., 2022; Devaney, 2023). For the development of Scotland's model, it is key that this ethos of continuous practice is embedded within every element of the design, that the purpose of the reviews is clear and articulated throughout, and that each stage of the process is orientated towards identifying and implementing change. This cycle of evaluation would include monitoring the recommendations put forward by reviews – not only whether they are being implemented, but how effective they are at generating the desired outcomes. Additionally, the model itself would be reviewed regularly and subject to improvements in its ways of working, depending on the evaluation of its impacts and effectiveness.

Contact

Email: dhsrmodel@gov.scot
