Primary care: national monitoring and evaluation strategy

Our approach to Scotland's national monitoring and evaluation of primary care reform up to 2028.


Our Strategic Approach to Monitoring and Evaluation

This Strategy sets out the overarching national approach and principles for how we will evidence and understand the reform of primary care between now and 2028, through varied and ongoing evaluation research and data analysis.[17] The Scottish Government and its partners will use the approach outlined in this document to prioritise research and analytical activity and to allocate resources. Over the 10 years of the strategy, we will build the evaluation evidence base through, for example: bespoke research projects commissioned by the Scottish Government, using methods appropriate to the specific research questions; synthesis of data and findings from others’ research and evaluation within Scotland and internationally; and expansion and improvement in primary care statistics. Fundamental questions for national evaluation of primary care will generally demand the triangulation of different sources of qualitative and quantitative evidence. 

Much of the considerable work needed over the next decade to ensure that we are capturing and understanding changes in primary care will not be undertaken by the Scottish Government or the national health boards.[18] It will happen in diverse places, generating evidence for and about primary care. There is merit, therefore, in having a shared vision and principles for evaluation, a shared outcomes framework and agreed national indicators which offer the basis of an approach that delivery partners and researchers could apply and adapt. This would support the comparability of evidence across the country and over time.

The strategy incorporates the Primary Care Outcomes Framework, which maps out activities and policies with their relationships to intended outcomes (Annex 1[19]), and introduces a set of key National Indicators for Primary Care Reform for system-level measures. An annual Primary Care Monitoring and Evaluation Workplan will set out the priority research projects and data activities for that year. We will use the outputs from this work to populate the Outcomes Framework as evidence and learning emerge.

Scope of the Strategy

For national government, there are two main drivers for evaluation: learning and accountability.[20] National policy evaluation must be objective, dispassionate, proportionate and rigorous. Evidence needs should be carefully identified and prioritised as early as possible, with sound methods for data capture built into projects and policies (which should have clearly articulated and plausible outcomes), to maximise the potential for meaningful evaluation and learning. It is important that monitoring and evaluation are not viewed as being done to those who are responsible for delivering primary care, or as a post hoc activity. Rather, we want to foster an evidence-based culture within primary care, where evaluation, monitoring and other intelligence needs are considered from the early phases of conceptualising and shaping a new way of working or a new policy. This includes an appreciation of the complexities of policy development and implementation within systems, as well as the relevance of different forms of evidence, at every step.

The focus of this strategy is: informing strategic policy decisions; understanding the impacts of policy at a national level; and being able to give a good, evidence-based account of what difference primary care reform has made for individuals and communities, the workforce and the system, especially at scale. Its outputs, and the processes involved in its delivery, will contribute to the ongoing evolution of our thinking about the purpose and potential of primary and community care.

We recognise that, below the national level, learning needs to be captured and fed back in ways and over timescales that are better achieved through improvement activity and local self-evaluation. These generate evidence which can contribute to national evidence of what works and why. The role of Healthcare Improvement Scotland (HIS) is core here, especially through the improvement and evaluation support it provides to Health and Social Care Partnerships and GP Clusters, its ihub[21] and the Scottish Health Council,[22] which support effective public and service user engagement in the design and delivery of primary care services. Delivery of the evidence for this strategy will partly depend upon the wider knowledge generation and research capacity-building of the Primary Care Evidence Collaborative and its member organisations, and the activities of other generators and funders of evidence, including national research councils, the Scottish Government’s Chief Scientist’s Office, and academic units not represented on the Collaborative.

Our approach acknowledges that local data collection, small-scale policy and service evaluations, self-evaluations, improvement activity and learning, economic analysis, modelling, and research (including clinical studies) contribute to a broad evidence base and may relate to, or be part of, wider programmes of monitoring and evaluation.[23] We will use a phased approach across the ten years, with an evolving portfolio of studies and data collections mapped against actions, activities and intended outcomes in the Outcomes Framework to capture learning and analyse the contribution of different actions and inputs. In some cases, process data and process evaluation will be more appropriate and helpful than analysis of outcomes which will take longer to emerge. We expect that the principles and approaches of Realist Evaluation, Contribution Analysis and Implementation Science will shape our approach over the decade.

What will we monitor and evaluate?

We are guided by the principles that:

  • evaluation should only happen when there is a reasonable assumption that genuinely new and useful learning can be generated;
  • research and evaluation must be proportionate, well timed, and have clarity of purpose. 

Work undertaken to deliver this strategy will focus on policies and changes intended to generate impacts that will be discernible at the national (‘macro’) and regional, pathway or sectoral (‘meso’) levels within the primary care system. Our approach is concerned with changes that have the potential to be scaled from a local to a wider geography, or which involve significant investment, systemic change or risk. Clearly, not all tests of change or new ways of working across Scotland in the coming decade will be subject to evaluation or research - nor should they be. It is also not for central government to decide how evidence is used to inform local or cluster-level decision-making, service delivery or clinical practice (the ‘micro’ level), or how learning is captured from those and then acted on. This strategy, however, offers transferable principles, methods and core research questions, and we have a responsibility to encourage the development of a more intelligence-informed primary care system, to support an improved data infrastructure, and to work with national partners to promote evidence and appropriate methods.

Our early monitoring and evaluation priorities are set out in more detail below. Much of our focus in the early years will necessarily be on how we integrate evidence from across diverse programmes and projects which are testing new models of care. Criteria for prioritising evidence gaps and the deployment of national evaluation resources are likely to include:

  • Level of investment (not just financial)
  • Public commitment to report on progress or impact
  • Risk – real or perceived
  • Public profile of the project or policy
  • What matters to people using services in relation to the topic or policy
  • The evaluability of the project or policy
  • How evidence-based or innovative the policy or model being piloted is. Ideally, policies being introduced and models being piloted should be founded on a sound evidence base. However, there may be occasions where it is justified to make changes and run innovative tests for which there is little current evidence, provided there is a reasonable underlying logic that activities will lead to positive outcomes. Some models will have been developed in quite different contexts from the test environment, in which case issues of fidelity, adaptability and generalisability will be important.

Contact

Email: socialresearch@gov.scot
