Annex 1: A note on terminology
This research report discusses issues relating to performance management in the Scottish planning system. A review of policy documents on this topic – drawn from a variety of sources – reveals a lack of rigour, precision and consistency in the use of terminology, and an inadequate understanding of the basic principles of performance management. In particular, there is a persistent failure to distinguish between the outputs, outcomes and impacts attributed to planning.
In the interests of clarity and consistency we have defined the key terms used in this report, showing how they should be used in the planning context. The key terms cover a sequence of steps which are summarised in the diagram.
Inputs refer to the human, financial and other resources available to planning authorities and, in principle, to other agencies involved in the planning process. The other resources may include legal powers, policies and published guidance. The inputs are deployed to deliver planning activities.
Activities describe the work that planners do; in particular, the creation of development plans and other local policies, and the management of the development planning system. These activities generate outputs and outcomes.
Outputs are the immediate, short-term results delivered by the development management process: in particular, approvals for specified quantities of built development and associated obligations. Output measures can be used to monitor the efficiency of the planning process: nothing has been built yet, but planning consents are an essential precondition for development on the ground – the outcomes.
Outcomes are the tangible results of the planning process. They capture the quantum of development started and completed and other associated measures; they can be analysed to measure the time lag between approval and delivery. Outcome monitoring provides essential evidence for impact evaluation.
Impacts are the medium to long-term strategic effects of planning, as assessed by post hoc evaluation. Such effects may be either:
- direct – for example, achieving better place quality, or
- indirect – for example, contributing to wider economic, social or environmental goals.
Impact evaluation may draw on a range of sources, including quantitative monitoring data and qualitative assessments.
Monitoring is "the collection of data, both during and after implementation to improve current and future decision making" (HM Treasury, 2018). This applies especially to planning activities, outputs and outcomes. Evaluation is "the systematic assessment" of an intervention's design, implementation, outputs and outcomes to establish the direct and indirect impacts of planning. These processes are connected: monitoring data provide an important source of evidence for evaluation studies. HM Treasury states that "both monitoring and evaluation should be considered before, during and after implementation".
Monitoring and evaluation are both key elements of the outcomes-based Scottish approach to Public Service Reform, and are reflected in the National Performance Framework. The Scottish approach (discussed in Section 4 of the report) explicitly acknowledges the complexity of delivering change through public sector intervention. It argues that we should be careful about claiming that the public sector causes change, and should focus instead on the contribution public sector organisations have made, taking account of social, economic and environmental conditions and the roles of other organisations. Some outcomes-based practices assume a linear and direct relationship between intervention and outcomes/impacts; the Scottish approach is predicated on the assumption that "the intervention interacts with multiple other factors to influence the outcomes [and impacts]" (Cook, 2017).