7. Implementing the PMF
7.1 The literature review has confirmed that finding a robust and practicable system for measuring the performance of planning systems has proved an elusive prize. A number of experts have produced sophisticated performance frameworks (see, for example, Arup (2011) and Wong and Watkins (2009)), but the complexity of these models, combined with a background of resource constraints, has proved a barrier to progress.
7.2 The recommendation, contained in the independent review, that there should be a greater emphasis on monitoring the outcomes of planning (and evaluating the impacts) commands broad support. Planners and professionals recognise the need for greater transparency and accountability, and for focusing on the difference that planning makes. Performance indicators empower citizens and communities: they can be a powerful tool for communications and advocacy. Monitoring the development management process is important, but it is not enough.
7.3 There is also a consensus that such a change needs to be approached in a pragmatic spirit, proceeding by stages if necessary. We noted in Section 5 that practitioners agree that "we should not let the perfect be the enemy of the good". The absence of ready-made models in the UK and – as far as we can judge – further afield provides confirmation that a fully comprehensive performance management framework, embracing both the direct and indirect benefits of planning, is still some way off. Even if such a model were available, implementing it would, in all probability, be impracticable in the near term.
7.4 But we can still do better, and there is an immediate opportunity to take practical steps to improve the performance management regime and to make it more transparent and outcome-orientated. There is, in our view, a strong case for an incremental approach, using some early wins to create a platform for the development of a more ambitious performance management system in the medium-term. There may be merit in commissioning pilot studies to test more sophisticated approaches and methodologies.
7.5 The consultations revealed support for this approach. As indicated in Section 5, the new PMF might be introduced in three stages as shown in Figure 7-1:
- Stage 1 would involve the extension of the existing Planning Performance Framework to include the monitoring of all planning outcomes above an agreed threshold. By reporting the start and completion of developments, this would enable robust and reliable reporting on progress towards housing, workspace and other performance targets.
- Stage 2 would see a first tranche of evaluation studies, focusing at this stage on the direct (placemaking) impacts of the planning system.
- Stage 3 would see the scope of the evaluation programme expand to include assessments of the indirect impacts on National Outcomes and other policy objectives.
Figure 7-1: Phased implementation of the performance management framework
7.6 In the following paragraphs we outline some preliminary thoughts on key elements of the performance management framework: monitoring, evaluation and the learning/feedback loop. These ideas draw on the literature review, our consultations and the workshop discussion. They have not been tested and they should not be treated as policy recommendations, but they are intended to inform the debate. We anticipate that they will be considered by the High Level Group on Planning Performance, HOPS, the Key Agencies Group and in other forums. We assume that the national planning performance coordinator will have a key role to play in developing proposals and supporting implementation. The detailed design and development of the PMF should involve all the key stakeholders, including citizens and communities; a system imposed from the top down will inevitably fail.
7.7 In process terms it is important to distinguish between outcome monitoring and impact evaluation. They are separate, though connected, activities, and some of their key features are summarised in Figure 7-2:
- we anticipate that the outcomes of all planning applications (with the possible exception of home improvements and other minor works) will be monitored; by contrast, evaluation is a discretionary and selective activity which may be time-consuming and costly, so local authorities and the Scottish Government will need to agree a programme of planning evaluations and decide how they will be resourced
- monitoring should focus on readily accessible, quantified performance data on a key set of planning outcomes: essentially, it should record and analyse the conversion of planning consents into completed development on the ground, and measure progress against development plan targets; evaluation will draw on this quantitative evidence, complemented by qualitative assessments, working within a framework of agreed criteria
- monitoring will generate standard reports accessible to all, presenting the results at the national and local level, and thus enabling comparisons to be made; evaluation reports, once approved, will be submitted to the Government and published online
- planning authorities will be responsible for the input of monitoring data, using guidance provided by the Scottish Government; evaluation studies will be commissioned by planning authorities and, from time to time, the Government, but they should be carried out by independent experts.
Figure 7-2: Monitoring and evaluation – key features
7.8 The key features of a basic performance monitoring regime are shown in Figure 7-3. They track progress from development management functions, through short-term outputs (planning permission granted or refused) to outcomes (development on the ground or the lack of it). These are cause-and-effect linear connections, over which planning authorities (at least in relation to activities and outputs) exercise a considerable degree of control.
Figure 7-3: Monitoring the outputs and outcomes of planning
7.9 The requirement here is for consistent, comprehensive and accurate quantitative data so that we can monitor the volume of development proposals coming forward, the "conversion rate" from applications, through approvals to implementation, and the speed of the process. It should be possible to aggregate planning authority data to generate national reports and to enable comparisons between areas, although there will still be a need for the informed interpretation of results. It does not necessarily follow that the planning authorities that rank highest on the key measures are the "best performers". Local market conditions, developers' attitudes to risk and many other factors all play a part, and some planning authorities may set the bar higher than others in terms of fit with policy, quality and conditions. Refusing an application which does not conform to the LDP, or which is of poor quality, can be a positive result.
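Purely as an illustration of the kind of reporting described above, the "conversion rate" could be derived mechanically from per-application monitoring records. The sketch below is a hypothetical example, not a proposed specification: the record fields, stages and sample figures are all invented for the purpose.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Application:
    """Hypothetical monitoring record for one planning application."""
    ref: str                    # unique reference number
    approved: bool              # was planning permission granted?
    started: Optional[date]     # development start on site, if any
    completed: Optional[date]   # development completion, if any

def conversion_rates(apps):
    """Approval rate, and the share of approvals completed on the ground."""
    approved = [a for a in apps if a.approved]
    completed = [a for a in approved if a.completed is not None]
    return {
        "approval_rate": len(approved) / len(apps) if apps else 0.0,
        "completion_rate": len(completed) / len(approved) if approved else 0.0,
    }

# Invented sample data: two approvals, one of which has completed.
sample = [
    Application("A/001", True, date(2018, 3, 1), date(2019, 6, 1)),
    Application("A/002", True, None, None),
    Application("A/003", False, None, None),
]
print({k: round(v, 2) for k, v in conversion_rates(sample).items()})
# {'approval_rate': 0.67, 'completion_rate': 0.5}
```

As paragraph 7.9 cautions, the numbers themselves would still require informed interpretation: a low completion rate may reflect market conditions rather than planning performance.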
7.10 There was a discussion at the workshop about a possible suite of outcome measures. It was agreed that the measures and targets adopted should derive from the National Planning Framework (NPF3) and the Local Development Plan (LDP), and that performance should be monitored both annually (to track year-on-year change) and cumulatively (to measure progress towards medium to long-term NPF and LDP targets). As indicated in Figure 7-3, a core set of quantitative measures could track completed development by type, for example:
- housing completions (including affordable homes)
- office and industrial development
- retail and leisure development
- social infrastructure (schools, hospitals, community facilities etc.)
- transport and physical infrastructure
- green and open space
7.11 Workshop delegates also suggested that there was a case for monitoring a number of other significant measures, also quantitative, which might include:
- progress on the national developments (and, in the future, regional priorities) identified in the NPF
- progress on local strategic priorities identified in LDPs
- reuse and development of derelict land
- supply of effective housing and employment land.
7.12 There is a case for creating a national database of planning applications. All qualifying applications would be entered into the system by planning authorities and given a unique reference number. Planning authorities would also be responsible for data entry at the key output/outcome stages. The present Planning Performance Framework, though useful, is fairly rudimentary, but a computerised system could be developed to provide real-time snapshots and generate regular, standardised national reports. The costs of developing and implementing such a system and training staff to use it would need to be determined, and it would require a one-off data entry exercise to populate the database, but the benefits – in terms of accuracy, confidence and accountability – could be considerable.
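To make the idea of a standardised national report concrete, the sketch below shows how records entered by planning authorities might be aggregated nationally. It is a hypothetical illustration only: the field names, reference formats and figures are assumptions, not a proposed national schema.

```python
from collections import defaultdict

# Invented flat records of the kind a planning authority might enter;
# the "stage" values are illustrative, not a proposed data standard.
records = [
    {"ref": "EDI/2018/0001", "authority": "Edinburgh", "type": "housing",
     "stage": "completed", "units": 120},
    {"ref": "EDI/2018/0002", "authority": "Edinburgh", "type": "retail",
     "stage": "consented", "units": 0},
    {"ref": "GLA/2018/0042", "authority": "Glasgow", "type": "housing",
     "stage": "completed", "units": 85},
]

def national_report(records):
    """Aggregate completed housing units by authority, as one line of a
    standardised national report."""
    totals = defaultdict(int)
    for r in records:
        if r["type"] == "housing" and r["stage"] == "completed":
            totals[r["authority"]] += r["units"]
    return dict(totals)

print(national_report(records))  # {'Edinburgh': 120, 'Glasgow': 85}
```

Because every authority would enter data in the same structure, the same aggregation could be run for any development type or stage, enabling the area-to-area comparisons described above.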
7.13 It is important to stress that we would not expect this basic system to create significant additional work, apart from a requirement to enter data (which is already available) into the computerised system at the appropriate stages in the process. Trying to achieve the same results by adapting the present clerical system (PPF) would almost certainly be more onerous. The model described here would provide a simple and reliable means of monitoring the operation of the planning system and its success in delivering the outcomes set out in the Planning (Scotland) Bill. Crucially, it would enable planners, policymakers, other agencies and the people of Scotland to monitor progress against the key planning targets contained in development plans.
7.14 If output and outcome monitoring is the domain of linear connections, impact appraisal operates in a world of complexity, where planning interacts with multiple actors and factors to produce long-term effects – directly on place quality, and indirectly on the economy, society and the environment. It calls for informed judgement, the use of soft and hard data, and the identification of useful proxy measures. To be credible and useful it needs to be conducted by independent experts, although planners, developers, communities and public sector bodies will all have a role to play.
7.15 Evaluation studies can be a major (time-consuming and costly) undertaking. While the monitoring system needs to be comprehensive, evaluation will be undertaken selectively, on the basis of the project's scale, strategic significance and sensitivity. In this report we have distinguished between the direct impact of planning on place quality – which may be considered to be the core purpose of planning – and its indirect impact on a wider set of policy goals. The same distinction applies here, and we have suggested that it may be sensible to start by focusing the first wave of evaluation studies on direct (place) impact, before broadening out to consider other National Outcomes.
Figure 7-4: Evaluating the impacts of planning
7.16 There is a before-and-after element to evaluation which means that projects (or groups of projects) need to be selected in advance, so that a baseline assessment can be undertaken. This might provide opportunities to engage with communities (possibly using the Place Standard) and to talk to planners, developers and statutory bodies to better understand their aspirations and expectations. The bulk of the activity will, of course, take place after the event when there is a new development (or place) to assess. As shown in Figure 7-4 above, a variety of approaches might be adopted, including a re-run of the Place Standard assessment, consultations with communities and key actors, evidence that design guidance/best practice models were followed, and award recognitions. Together these sources will enable balanced judgements on the contribution of planning to better placemaking.
7.17 Looking beyond place impacts to the National Performance Framework and the National Outcomes, the problems of attribution become more challenging. Planners should expect to have a significant influence on placemaking, but only limited influence over policy domains (for example, inclusive growth, health and wellbeing, and learning) which are the primary responsibility of other agencies. Here the focus should be on how planners can make an effective contribution, and ensure that their knowledge and expertise is applied in the most appropriate way, for example by influencing the design of schools or hospitals or promoting active travel.
7.18 The development of detailed guidance for evaluating the impact of the planning system will be a substantial task. In our judgement, commissioning and formally adopting new guidance is likely to take 9-12 months. At the workshop we had a preliminary discussion about the scope of work, focusing on indicators and methodology. Delegates recognised that, while outcome monitoring should be a universal system applied to all qualifying planning applications, post hoc evaluation will be a discretionary process. Planning applications – or groups of applications – would need to be selected in advance based on factors including scale and (national or local) strategic significance. This would enable baseline studies to be undertaken before work starts and (potentially) some selective monitoring of the project while it is working its way through planning and into the implementation phase. The selection of projects for evaluation could help to encourage culture change and new ways of working, for example, by making more proactive use of design panels and/or design champions and identifying best practice exemplar schemes. Because the issues are "owned" by a variety of partners, there may be merit in planning authorities and other agencies (for example, NHS Boards or universities) jointly commissioning themed evaluations.
7.19 The distinction between the direct and indirect impacts of planning will need to be reflected in the timing of evaluation studies. The direct effects (better places) should be discernible relatively quickly, within, say, 12-18 months of completion, but the impact on other national outcomes will only be measurable after the new building/place has been in use for some time and has a chance to "bed in". An interval of 24-36 months might be appropriate, but some effects (for example, improved health and learning outcomes) may take longer to manifest themselves.
7.20 Workshop attendees had a preliminary discussion about impact measurement. They noted that hard data from outcome monitoring would be an important source of evidence, but would not be sufficient to inform judgements about better place quality or wider socio-economic impacts. Any attempt to define what makes for "a better place" will inevitably take us into contested territory (yellow book, 2017) and the domain of subjective judgements, but there was strong support for the suggestion (in the brief for this study) that the Scottish Government's Place Standard could be used to measure changes in how places are experienced and perceived. The practitioners we spoke to consider the Place Standard to be a useful, though imperfect, tool and most thought there was merit, albeit with some reservations, in using it to measure change over time.
7.21 The Place Standard (Figure 7-5) enables individuals, or groups working together, to assess 14 aspects of place quality. Responses are scored and aggregated, enabling an assessment of the subject area's relative strengths and weaknesses. Our consultees raised some important questions about the way in which Place Standard assessments are conducted and the dangers of "group-think". Practitioners know that results may vary depending on weather conditions or the time of day. There was some discussion about the appropriateness of a one-size-fits-all approach. Does it matter if a business district does not provide play facilities, or if a residential neighbourhood does not offer employment opportunities?
Figure 7-5: The Place Standard
7.22 These concerns were noted but the consensus view was that the merits of the Place Standard easily outweigh the reservations. It can be refined and improved over time and more people are learning how to use it effectively. There was, therefore, strong support for using the Place Standard as part of the impact evaluation process. Some suggested guiding principles emerged:
- the Place Standard should be conducted before planning consent has been granted, preferably at the pre-application stage, to help establish the baseline situation, and identify existing strengths to be preserved and weaknesses to be addressed
- communities should be encouraged to participate and, if practicable, the assessment should be conducted on multiple occasions and with a range of audiences to help produce more robust and reliable results
- a re-run of the Place Standard assessment should be a key element of the post hoc evaluation process; as far as possible the process should replicate the baseline stage, enabling before-and-after comparisons to be made with confidence.
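The before-and-after comparison envisaged in the guiding principles above can be sketched in simple terms. The example below is purely illustrative: the theme names follow the published Place Standard tool (14 themes scored on a 1–7 scale), but the scores themselves are invented.

```python
# Hypothetical baseline and post hoc Place Standard scores (1-7) for a
# handful of the tool's 14 themes; the values are invented for illustration.
baseline = {"Moving around": 3, "Play and recreation": 2, "Streets and spaces": 4}
follow_up = {"Moving around": 5, "Play and recreation": 4, "Streets and spaces": 4}

def change_report(before, after):
    """Per-theme change between the baseline and post hoc assessments."""
    return {theme: after[theme] - before[theme] for theme in before}

print(change_report(baseline, follow_up))
# {'Moving around': 2, 'Play and recreation': 2, 'Streets and spaces': 0}
```

A simple difference of this kind is only the starting point: as paragraph 7.23 notes, the results would still need analysis and interpretation before changes could be attributed to the planning system.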
7.23 The Place Standard offers a practical way of measuring how planning and development has changed places, and of testing whether those changes have been for the better. But the results will still need to be analysed and interpreted before changes can be attributed with confidence to the operation of the planning system. Evaluation studies will need to identify ways in which the planning system has influenced – or sought to influence – the quality of built development. For example, were the original proposals amended in order to enhance placemaking impacts? Was this the result of negotiations with planning officers, feedback from pre-application consultations, a response to advice from a design panel or some other factor? Evaluation will have a dual purpose: to determine whether the project has created a better place, and to determine whether (and how) the planning system has added value.
7.24 As the new system becomes established there will be opportunities to attempt more sophisticated analysis, and to augment the Place Standard with other sources, such as the qualities of successful places set out in Creating Places (Scottish Government, 2013), and the best practice case studies published by Architecture & Design Scotland and other agencies. Nominations for the RTPI Awards for Planning Excellence, the Scottish Awards for Quality in Planning and competitions organised by the RIBA, RIAS, the Civic Trust and others may also be valuable sources of evidence.
7.25 Planning seeks to create better places, and it is sensible for the impact evaluation effort to start there. Over time, the evaluation programme should also address the indirect impacts of planning, by examining its impact on, among others, health and wellbeing, productivity, learning, climate change resilience and other factors addressed by the National Outcomes. More work will be required to develop practical guidance for evaluation on these topics, but there is already an extensive literature on how good architecture and placemaking can contribute to wider policy goals. The Architecture & Design Scotland website (www.ads.org.uk) provides access to an extensive body of research on topics including learning environments, building for wellbeing and health, sustainable design, innovation and culture. The Design Council CABE has a large archive of case studies on a similar range of topics.
Learning from experience
7.26 Together, monitoring and evaluation enable planners, policymakers and citizens to gain a better understanding of the planning system, its successes and failures. Scrutiny of planning authorities is part of the story, but a more holistic approach is needed in order to build a better understanding of the Scottish planning system. Is planning performing its core functions effectively by translating development plans into planning consents and completed developments – and by preventing developments that do not conform to the plan? Is it helping to create better places to live, work and play? Beyond this, is it exerting a beneficial influence on a wider array of economic, social and environmental policies? The planning performance framework should help everyone concerned with the planning system – and the wider public – to understand the contribution that planning is making to a wealthier, fairer, smarter, healthier, safer, stronger and greener Scotland. It should provide insights into what works and what doesn't, promoting a culture of learning, innovation and continuous improvement.
7.27 Figures 7-3 and 7-4 highlight some of the ways in which that culture of learning can be resourced and nurtured:
- the output/outcome monitoring system will generate reliable, standardised reports at both national and local level, enabling comparisons between areas and (over time) trend analysis
- a programme of better place evaluations will produce a series of authoritative reports which will provide a commentary on efforts to use the planning system not just to deliver development but also to enhance place quality, and insights into what works
- these reports will be complemented by another series of studies which will explore the wider strategic impact of planning on national and local policy goals.
7.28 We anticipate that the national planning performance coordinator (when appointed) will use these resources to stimulate debate among planning professionals, developers and community organisations. Among the opportunities identified by this study are:
- the development of best practice guidance and case studies in partnership with Architecture + Design Scotland and other agencies
- a website dedicated to planning, development and placemaking
- events and seminars to showcase the key messages and celebrate success
- professional masterclasses with the authors of evaluation studies
- events for community activists.
7.29 The opportunities, in terms of professional development and promoting an informed debate about the role of planning, are clear but there will need to be a concerted effort, championed by the Scottish Government, the Improvement Service and others, to ensure that the insights and learning generated by performance monitoring and evaluation are shared, understood and internalised. There needs to be a feedback loop so that the lessons learned from experience can shape future local and national planning policies. Communities will want to see evidence that planning authorities are learning from experience and doing things differently. Developers and project promoters will expect to see success stories highlighted and celebrated.
7.30 The proposal to move towards an outcomes-based performance management system for planning in Scotland has been warmly received, by planners and a wider circle of interested organisations and individuals. But a strong sense that "this needs to happen" is qualified by some concern about the resource implications. Planners are acutely aware of the pressures on local authority budgets for planning and development, as reported by the RTPI and the Institute for Fiscal Studies. Meanwhile, some developers are concerned that greater emphasis on monitoring outcomes might slow the system down (KMA, 2017).
7.31 We have listened carefully to these concerns, which were reflected in the consultation interviews and at the workshop. We agree with those who said that moving to outcomes-based monitoring would represent a significant shift in the culture and practice of planning in Scotland, making the profession more open and accountable than previously. The consensus was that the benefits of change would outweigh the costs and we were encouraged by the enthusiasm of the practitioners we spoke to.
7.32 It was generally recognised that adopting a planning performance framework of the type described in Section 6 would inevitably entail some one-off transitional costs, but that the day-to-day operation of a system for monitoring planning outputs and outcomes (as described above) should be no more onerous than the present arrangements and might even deliver some modest savings. Once decisions about the detailed design of the monitoring system have been made, the likely transitional costs will include:
- design and commissioning of the national planning database
- production of guidance material
- delivery of staff training.
7.33 The basic output/outcome monitoring system described above will use data which is already collected by planning authorities, but which will be recorded in new ways, possibly on a national database which will provide a platform for reporting. Once the arrangements have been agreed they will be mandatory for all planning authorities, in line with the provisions of the Policy Memorandum (Scottish Government, 2017c).
7.34 Regular evaluations of the impact of planning would, by contrast, have significant resource implications. The development of guidance for impact evaluation will be a substantial task, and we would anticipate a series of launch events to introduce the new approach. Ministers will need to decide whether post hoc impact evaluations should be mandatory for planning authorities and, if so, how the consultancy costs will be met. The development of a (possibly 3-year) national evaluation programme might be the responsibility of the national planning performance coordinator. Depending on the scale of the project, consultants' fees are likely to be in the order of £15,000 to £50,000 per project. Evaluation studies should be carried out by independent experts but they are likely to require significant inputs from planning authority staff.