6. Preconditions for effective impact assessment
6.1 Literature findings
The literature provides considerable information about the preconditions for effective impact assessment. These are discussed here in roughly the order of importance suggested by the literature.
Supportive organisations, high-level commitment: This is key, and is cited in a range of studies (e.g. National Women's Council of Ireland, 2017; OECD, 2012, 2019; Pinto et al., 2015). Supportive organisations for SEA include the Netherlands Commission for Environmental Assessment which reviews SEAs and prepares advisory reports; the Irish Environmental Protection Agency whose SEA division is led by a strong, vocal and effective 'SEA champion'; the Scottish Government's SEA Gateway; and the Future Generations Commissioner for Wales. For HIA, the Welsh HIA support unit, the London Health Observatory, the Institute for Public Health in Ireland and the Intersectoral Policy Office in the Netherlands are all cited (Pinto et al., 2015; Walpita and Green, 2000).
Harris et al. (2013) note that
"The individuals involved in the [HIA] process had a significant influence on both the process and outcomes of the HIA. There were two main facets to this: direct involvement of the right people and at the right level… The right people often have the power to either make or influence decisions. Interestingly the right level is generally not the highest level of decision-making [but] are often at senior management level. They have some, but not ultimate, power, understand the system well, often have pre-existing relationships that they can utilise, and are often in a position to influence the implementation of recommendations".
In the Netherlands as well, Verloo (2007) noted that gender impact assessment has "commitment of the political top 'in theory', but lack of commitment of civil servants". As long as civil servants fail to see how impact assessment enhances the quality or status of their work, "they remain more a part of the gender problem than its solution". In its early days, gender impact assessment was supported by a pioneering equality officer, but when that officer left, not much more progress was made (Roggeband and Verloo, 2006).
Expectation that impact assessment will lead to change, willingness to learn and experiment: Kearns and Pursell (2011), writing about HIA in Ireland, highlight how organisational culture inhibits radical, strategic and long-term change through a complex arrangement of beliefs, paradigms, cultural codes and knowledge. "In the absence of addressing the unique cultures and sub-cultures within organisations, the 'deep structure' of basic values and beliefs inhibit anything but marginal change from occurring" (Kearns and Pursell, 2011). Nykvist and Nilssen (2009) also refer to 'institutional lock-in' and inertia for Swedish sustainability appraisals: "Doing things out of custom constitutes not only an important meso level constraint (through existing cultures, most commonly included consultants and agencies, and advice from other colleagues etc.) but also a micro level institutional constraint (on a personal level due to previous experience, professional background etc.)" (Nykvist and Nilssen, 2009).
In the Netherlands, gender impact assessment was perceived as a threat to policy-makers' position and vision. In response, the Department for the Coordination of Equality Policy aimed to make gender impact assessment look less 'dangerous and threatening' by simplifying the process. This was criticised by academics who felt that this nuanced approach did not challenge the prevailing culture enough: "In a certain way, this could also mean that the DCE officials did not have a deep understanding of the complexities of their own position and the position of their bureaucrat colleagues, and were trying to dismantle the master's house with the agreement of the master" (Roggeband and Verloo, 2006).
In contrast, the Welsh Audit Office (2018) clearly noticed this inertia after the first round of well-being assessments, and challenged it:
"Are we going to rely too much on the past and not think through what we need to do to radically change, to develop new ways of approaching the aims and goals of the legislation?… Don't expect from the auditor, or from Sophie, a clear 'this is how to do it' - so you can go away and tick the boxes. It's not like that. It is, however, a journey in which I'm engaged, you're engaged and Sophie is engaged… This Act is just what was needed to unsettle the status quo, ruffle a few feathers, and bring public services back to the purpose they were set up for in the first place – to improve the lives and well-being of people here in Wales, today, and for every tomorrow to come".
The Future Generations Commissioner for Wales (2017) echoed this: "it appears that [Public Service Boards] are 'playing safe' in how they are approaching well-being, and not yet taking the opportunity to challenge 'business as usual' approaches. This level of challenge will be essential to combat entrenched mind-sets and ways of working, and to enable new approaches and perspectives to be developed".
Oversight, quality assessment: An OECD (2020) review of the Dutch RIA system noted that "An effective oversight function is critical to ensuring high quality evidence based decision making and enhancing the impact of RIA frameworks". The Dutch Regulatory Reform Group essentially acts as a mechanism of oversight based on anticipated reactions: knowing that their proposals will be scrutinised at a high level, departments do not send in draft policies unless there is confidence in the quality of RIA, since they do not want to be named and shamed by a negative opinion (Radaelli, 2009a).
In Ireland, Ferris (2016) calls for a RIA 'gatekeeper' who can assess the quality of individual RIAs and challenge proposals that are not accompanied by satisfactory assessments. He cites the EU's Regulatory Scrutiny Board as a good model for this.
In New Zealand, where there are no similar procedures for ensuring accountability, Kupiec et al. (2015) recommend that a quality assessment should be prepared for each RIA, preferably by external experts rather than internally by the agencies. At the project and plan level, in some countries (e.g. Ireland, the UK) the potential for the public to legally challenge an inadequate impact assessment plays an oversight-type role, by putting strong financial and reputational pressure on planners and consultants to prepare an adequate assessment (Gonzalez and Therivel, 2020). However, we found no similar evidence at the policy level.
Timing: Starting an impact assessment early in policy-making is not a guarantee of effectiveness, but starting it late in policy-making is an almost-guarantee of ineffectiveness. This has been stated by multiple sources, including Gray et al. (2011) for HIA in multiple countries; EOHSP (2007) for HIA in Wales; Ward (2006) for HIA in New Zealand; Roggeband and Verloo (2006) for gender impact assessment in the Netherlands; Ferris (2016) for RIA in Ireland; and van Buren and Nooteboom (2009) for SEA in the Netherlands.
Fitting the assessment results to the decision: Impact assessments cannot influence policy if they are a separate process, not fitted and adapted to the relevant policy decision. Monteiro et al. (2018) and Mahoney and Morgan (2001) argue that impact assessment processes cannot be adopted from elsewhere – or indeed from a different 'scale' (plan v. policy, national v. local) – without adapting them to the context where they will be applied. Bekkers (2007) notes, in the context of Dutch HIAs, that impact assessments have to be useful to policy-makers: "policy-makers' frames of the usefulness, feasibility and acceptability of the proposed alternatives are the main conditions for policy change". After interviewing 14 chairs and/or secretaries of Swedish Committees of Inquiry, which have a key role in policy formulation, Nykvist and Nilssen (2009) conclude that the message of the impact assessment must be carefully framed. Two of their interviewees observed that:
"A large proportion is a pedagogic presentation. If you could make the results understandable to a common politician or official…" (Interviewee Case D)
"I feel that the academic community, for as long as I can remember, when the dissertation is completed, they never move to the next phase, that is, to market this new knowledge..." (Interviewee Case C)
Nykvist and Nilssen (2009) note that the use of impact assessments to inform policy "is inevitably a role of advocacy": impact assessment faces a delicate balance between providing objective knowledge and making acceptable and applicable recommendations. "No matter how broad and evenly weighted assessment between economic, environmental, and social impacts a method or framework for appraisal of sustainable development is designed—it will still be viewed as advocacy, rather than an objective decision support system" (Nykvist and Nilssen, 2009). Turnpenny et al. (2009) and Owens et al. (2004) similarly describe impact assessment as an inherently political exercise rather than an objective activity, and suggest that coercion (e.g. legal requirements and increasingly prescriptive guidance on impact assessment) is unlikely to make policy assessment more integrated into policy making.
Public/stakeholder involvement: Participation of a wide group of stakeholders has been cited by multiple sources as being key to impact assessment effectiveness: "the HIA, the participatory elements involved, and the communication strategies were a precondition to move out of a situation of long-standing political stalemate" (EOHSP, 2017). EPA (2016) suggests the same for US environmental justice work; and Hess and Satcher (2019) found that factors other than impact assessment – for instance alliances between local residents and local/state agencies, and civil disobedience – were likely to be more effective in providing remediation from high-impact large-scale development projects than the impact assessments themselves.
Gray et al. (2011) and Mahoney and Morgan (2001) note that community engagement – not just the engagement of key stakeholders and powerful sections of the community - is critical in the success of HIA. However, Helbig et al. (2015), based on five international case studies, noted that effective stakeholder engagement requires a nuanced understanding of who the relevant stakeholders are, and a judgement about which stakeholders represent particular aspects of or viewpoints on a complex problem:
"Despite the common rhetoric of 'citizen' participation, the cases show how it is often impractical to engage members of the public or representatives of the full range of relevant stakeholders. In these situations, policy modelers and policy makers needed to appreciate the limitations of stakeholder engagement and aim for results that take advantage of less-than-complete stakeholder participation".
They also found that, in order to participate in meaningful ways, stakeholders needed to be educated about the purpose of their participation, the processes and tools to be used, and how their input would be taken into account (Helbig et al., 2015).
Funding/resourcing: For impact assessments to be carried out well, they must be adequately resourced, and given enough time to be carried out. Harris et al. (2013) describe the positive influence of a conjunction of factors: "the time was right, time was available, the opportunity was recognised, the right person was available, the HIA fitted into existing work, funding was available". However, Harris-Roxas et al. (2012) note that the resourcing of HIAs "remains a challenging practical issue… An implied rationale for the application of HIA is often economic – that it is better to invest in preventing health problems now rather than 'paying a larger bill later'… HIA requires resources and has to be detailed to be credible, but also has to be responsive to decision-making and budgetary requirements" (Harris-Roxas et al., 2012).
Several studies (Hilding-Rydevik and Akerskog, 2011; Knutsson and Linell, 2010; Pinto et al., 2015; Turnpenny et al., 2009; WHO, 2018) suggest that little or no specific funding is allocated for impact assessments: they are done 'on top of the day job' and therefore the extent to which planners are able to fully embrace the work is limited. For New Zealand, where SEA is not legally required, "introducing yet another process into what is already a complex regional planning process would not be welcome by an authority with stretched resources" (McGimpsey and Morgan, 2013). In Ireland, "to some civil servants [Policy Appraisal and Fair Treatment] is just another scheme which they have to implement. It was noted that civil servants were suffering from initiative fatigue because of the number of changes that had taken place in the civil service in the last decade" (Osborne et al., 1999).
Adequate data and expertise: Lack of appropriate data (see also 'follow up' below) can restrict the effectiveness of impact assessment. For instance, the Irish EPA (2015) noted that HIA alone can require understanding of sampling, analysis, fate and transport of chemicals within the environment; quantitative exposure assessment methods; epidemiology; toxicology; public health; impact assessment techniques; community relations and stakeholder engagement; and regulatory and policy analysis. Examples of data and expertise gaps identified in the literature are:
- Information on the rural impact of government policies, for instance policies that impact on micro businesses, health care reform, and energy and planning policies, all of which could have a significant impact on rural groups (Sherry and Shortall, 2019). "This raises the question as to whether sufficient evidence exists for departments to do more ambitious rural proofing. In some cases there just clearly isn't any evidence" (Rural Community Policy Unit, 2014).
- Information on people's lived experiences and well-being: "Before well-being plans are set, work should be undertaken to 'dig deeper' into data to better understand the causes and effects of key issues and trends, in relation to both community well-being and individual well-being" (Future Generations Commissioner for Wales, 2017).
- Gender expertise in government departments in the Netherlands (Verloo, 2007).
Harris-Roxas et al. (2012), WHO (2018) and Iglesias-Merchan and Dominguez-Ares (2020) all call for a more robust evidence base for HIA, and greater use of health professionals. In contrast, Mahoney and Morgan (2001) suggest that, particularly at the policy level, there may be little quantitative evidence to support HIA, and that the HIA community needs to better legitimise qualitative assessments.
OECD (2020) notes that there is much established RIA expertise across Dutch government departments which could be better drawn on, and that RIA training should be systematically provided to policy officials to encourage the development of expertise in evidence-based policy-making. Roggeband and Verloo (2006) suggest that expertise must constantly be refreshed, as the constant movement of top bureaucrats between ministries leads to a loss of institutional memory.
Collaboration and information sharing: This is particularly important at the start of a new impact assessment requirement, so that organisations responsible for carrying out the assessments can exchange good practice, learn from each other, etc. The WHO (2018) found that 31% of respondents to an online questionnaire about HIA stated that collaborative partnership with other sectors facilitates the further integration of health into SEAs, with only adjustments to HIA guidelines receiving a higher score. Radaelli (2009a) decried the lack of a professional RIA community in the Netherlands and UK, stating that addressing this could improve the nature and content of RIAs.
To this end, the Irish Environmental Protection Agency has set up an SEA Forum which runs quarterly workshops to build capacity and allow information sharing between SEA practitioners. In the Netherlands, interdepartmental forums at different levels of government facilitate cross-government co-ordination and discuss issues and improvements regarding the RIA process (OECD, 2020). The publication of impact assessments by, for example, the Scottish SEA Gateway and US government departments also allows information sharing, dissemination of good practice and efficiency.
Follow up: Monitoring is needed of the actual impacts of policies, and of how impact assessment findings have been integrated into policies: Why were the recommendations of an impact assessment integrated or omitted? What mitigation measures work in practice? Why are actual effects different from those that were predicted? (Ferris, 2016; Gonzalez and Therivel, 2020; Iglesias-Merchan and Dominguez-Ares, 2020)
However, the literature suggests that minimal monitoring, at best, is carried out across impact assessment types. For instance in Sweden, there is no follow-up of regulations, except as a precursor to changing a law (van der Sluijs, 2017). Irish SEAs must include a statement about monitoring, but monitoring data are rarely, if ever, fed back into the next round of SEA (Gonzalez et al., 2019). In New Zealand an RIA should contain a plan of evaluation and monitoring, but there are no procedures of control, reporting and accountability (Kupiec et al., 2015).
There are good examples of impact assessment practice, such as the Welsh well-being assessments, and it is clear that impact assessments increase policy-makers' knowledge and awareness. The preconditions for effective impact assessment highlighted in the literature fall into two categories: process and behaviour/culture.
Process:
- Starting the assessment early in policy formulation
- Providing adequate resources
- Quality review
Behaviour and culture:
- Commitment to considering the assessment findings
- Involving others
- Sharing information
- Orienting the assessment findings to the policy decision
The literature suggests that good process alone cannot lead to effective impact assessment: a change in behaviour and culture is also necessary.