3. Are impact assessments carried out as expected?
The assessments discussed in this report are all 'required', but are they actually carried out? Are they carried out at a time that still allows them to inform and influence the policy? And are they more than just a formality? This section considers these points in turn.
It has been difficult to find much material for this section, and particularly difficult to find balanced views: it is much easier to criticise than to praise, and academics are particular culprits.
3.1 Are the assessments carried out at all?
The limited literature on this topic suggests that, although plenty of assessments are being carried out, many are limited at best, and assessments are often not carried out for the policies that need them most. However, much of the academic literature on the use of impact assessments is quite dated, meaning that it is difficult to get an up-to-date impression of levels of compliance.
In Ireland, over 200 bills were published between March 2011 and December 2015. RIAs were prepared for nearly half of those bills and more than 93% of those RIAs were published on government websites, although it was not always easy to find them (Ferris, 2016). In the UK, between 2002 and 2004, compliance with the RIA process ranged between 92% and 100% (Sadler, 2005). In the Netherlands, Roggeband and Verloo (2006) describe the application of gender impact assessments as random and their success as relative: at least 22 policies were subject to gender impact assessment between 1994 and 2004, but this was only a small proportion of the hundreds of policies that were developed each year.
In Sweden, almost 10,000 RIAs were carried out over five years in the mid-2000s, but in only a few cases were these presented as separate and discernible documents, and the overwhelming majority were just a few sentences long (Erlandsson, 2008). The EOHSP (2007) also found that not all HIAs are documented, especially in countries such as Sweden and Finland, where HIAs are integrated in regular decision-making at the local level. More recently, Nerhagen and Forsstedt (2016) noted that, in Sweden, several agencies are responsible for policy-making, leading to a fragmented RIA system with unclear requirements for cost-benefit assessment; and that institutional and organisational changes over time disrupt policy development and RIA. In New Zealand and Quebec, government departments other than for health were reluctant to have a health-sector vision imposed on them, and were unwilling to internalise an HIA process by developing in-house skills (Morgan, 2008; Molnar et al., 2016). Morgan (2008) postulates that this might be because the departments saw HIA as the start of an invidious trend, with other social policy areas potentially following on to also demand a say in policy development. More positively, in Ireland a combination of legal challenges against poor SEAs and an effective 'SEA champion' in the form of the Environmental Protection Agency has led to an increase in the sectors engaged in SEA and a greater openness to the process (EPA, 2020).
In Ireland, there was a "high level of formal compliance" with poverty proofing procedures (Office of Social Inclusion, 2006), but Johnston and O'Brien (2000) found that poverty proofing seemed to be applied primarily to policies that were themselves designed to reduce poverty, whereas the point of the exercise is to assess those policies that might not have an obvious impact on poverty. The EOHSP (2007) also noted that "HIAs conducted on an ad hoc basis may sometimes be affected by opportunistic politics. It may be argued that the HIA was only initiated because the expected outcome would support the pending decision." Johnston and O'Brien (2000) found that some reports simply stated that 'the impact on those in poverty would be positive', but that this is not a clear indication of poverty proofing as legally required.
In Ireland, no regular list of regulations and RIAs is published (Ferris, 2017), and the same seems to hold true for most other countries and impact assessment types. A few notable exceptions are Scotland's SEA Gateway, which publishes a comprehensive list of SEA reports, and the US Environmental Protection Agency's environmental impact statement database.
3.2 Are the assessments carried out at a time when they can inform the policy development?
The late timing of impact assessment, after key policy decisions had been made, was highlighted as a problem in a number of studies, although reference to the actual timing of impact assessments (as opposed to the time when impact assessments should be carried out) was sparse.
Dunlop et al. (2012) suggest that a combination of RIA starting early and the use of outside consultants is particularly important in improving the understanding of the cause-and-effect mechanisms that underpin policy issues. The EOHSP (2007) found that some HIA results were not conveyed to decision-makers because the assessment was not completed on time, and so did not inform the decision. Radaelli (2009a) noted that, for UK RIAs carried out between 2005 and 2007, "systematic analysis of alternative options is often lacking giving the impression that most RIAs started at a late stage, with one option already chosen outside the RIA". Turnpenny et al. (2009) found that RIAs in the EU, Germany, Sweden and the UK tended to be started late in policy formulation, and the timing of SEAs in the Netherlands is an 'area for improvement' (van Dreumel, 2005). The OECD (2020) found that the Dutch RIA template tends to be completed late in the policy process, leaving little room to consider alternatives.
3.3 Are the assessments more than a formality?
The literature – much of it in (the more critical) peer-reviewed academic journals – suggests that impact assessments are often carried out in a minimal fashion, and are not achieving their aims. Both OECD (2009, 2011) for RIA and Monteiro et al. (2018) for SEA describe this as a 'gap' between the ideals of the impact assessment and what it actually does.
For instance, the EOHSP (2007) identified four types of effectiveness when assessing the effectiveness of HIAs, based on whether either or both of two outcomes were achieved: 1. the plan changes as a result of the HIA information, and 2. the HIA is explicitly acknowledged in the planning decision (see Figure 3.1). An HIA is only clearly effective when both of these occur (direct effectiveness). Of the 54 policy HIAs the EOHSP reviewed, only half were reported to decision-makers.
Radaelli (2009a) is also cynical about international RIA practice, distinguishing between impact assessments that
- improve the efficiency and effectiveness of policy in the way intended
- are used to increase policy legitimacy, but not to improve the policy
- are used to 'stack the deck' (increase core executive control on the regulators), tweaked to support broad existing policy trajectories, or stripped down and used as a symbolic signal.
Figure 3.1: Four types of HIA effectiveness (EOHSP, 2007)

| Health/equity/community adequately acknowledged | Pending decision modified according to health/equity/community aspects and inputs | Pending decision not modified |
|---|---|---|
| Yes | Direct effectiveness | General effectiveness |
| No | Opportunistic effectiveness | No effectiveness |
Although Radaelli (2009a) does not discuss the proportion of RIAs that fall into each category, Dunlop et al. (2012) do. Analysing 31 RIAs from the EU and UK, they distinguished between 'instrumental' use of RIA to enhance understanding of the mechanisms that underpin the policy issue; 'communicative' use to provide consultees with information on the impact of the policy proposal; 'political' use to control bureaucracy or handle conflict; and 'perfunctory' use to water down or not implement RIA. Of the 31 RIAs, 13 exhibited instrumental use, 5 communicative use, 13 political use and 16 perfunctory use (a single RIA could exhibit more than one type of use). Turnpenny et al. (2009) also suggest that the knowledge produced by impact assessments is often little used in policy-making and, when it is, it is often used to bolster political positions or justify decisions already taken.
In the Netherlands, interviews of civil servants showed that, although some felt that gender impact assessments were an 'eye-opener' and improved their policy proposal, more often the assessment results caused resentment, irritation and resistance. Gender mainstreaming was felt not to be a policy priority (Roggeband and Verloo, 2006). In Sweden, the ministry in charge of a given issue frames the problem, and sets the policy directions and key priorities in advance, and there is heavy emphasis on gaining political consensus, meaning that knowledge from RIAs has difficulty 'creeping in' (Hertin et al., 2009). Turnpenny et al. (2009) suggest that RIA is viewed across all jurisdictions as a 'largely irrelevant formality'.
An analysis of 47 HIAs in Australia and New Zealand (Harris et al., 2013) found that many HIAs led to real changes on the ground (see Sec. 4), but some exhibited 'opportunistic effectiveness' in that they were carried out with the intention that they should support a decision already substantively taken, or only those elements of the HIA that supported the decision were taken up while other, more challenging recommendations were ignored. In other cases, HIA recommendations were rejected, ignored or simply ineffective. Owens et al. (2004) also note that the output of assessments can be invoked or even manipulated in order to rationalise decisions already taken on the ground: "The problem – well rehearsed – is that ethical and political choices masquerade as technical judgments, reinforcing prevailing norms and existing structures of power".
The OECD (2009) also found that officials' analyses of the economic costs and benefits of their regulatory proposals tended to be incomplete, "a 'check the box' approach that does not seriously influence policy development". They concluded that "The lack of full analysis in RIA appears sufficiently widespread to be a fundamental constraint on realising the full benefits of RIA". An analysis of rural proofing practice in Ireland (Sherry and Shortall, 2019) also suggests that much of it involves 'box ticking'. Hilding-Rydevik et al. (2011) suggest that mixed messages from government might be a key factor in this: on the one hand, government suggests that impact assessment will result in major benefits, but on the other hand it expects no significant change to existing practice and no increased cost. Roggeband and Verloo (2006) note a similar tension in gender mainstreaming, where states have an official commitment to gender equality as a political goal, but also are de facto agents of gender inequality.
Based on Swedish and Dutch practice, Monteiro et al. (2018) blame the lack of SEA effectiveness on the SEA process itself: the capacity gap between how SEA is intended to work and how it works in practice "reflects the lack of adjustment of formal SEA model requirements in relation to the need to fit for purpose in specific governance contexts…". In particular, Monteiro et al. (2018) note that the 'capacity gap' seems to occur where countries – for instance China and Vietnam – have imported impact assessment models without having the underlying base of experts, tradition of public participation, up-to-date data, or government flexibility and transparency.
In Wales, the absence of data about future trends has meant that "the majority of [wellbeing] assessments provided little insight into future trends or multi-generational policy challenges. Some [assessors] questioned the validity and value of focusing on the future, describing it as an 'inexact science' whereas others were very vague about their approach to long-term planning and forecasting" (Future Generations Commissioner for Wales, 2017).
It is much easier to refute an argument than to make it: making an argument means showing that the argument works in all cases, whilst refuting it requires only one example of the argument not working. The fact that many impact assessments are not carried out, are carried out too late, or are superficial does not mean that they should not be carried out. Rather, as Monteiro et al. (2018) note, it raises the question of whether the impact assessment process is fit for purpose: whether policy makers understand what the purpose of impact assessment is, whether they have the capacity and resources to carry out assessments, etc.
Other reasons for limited impact assessment effectiveness include an emphasis on political consensus, the perception that impact assessments reduce a ministry's flexibility and control, concern about 'assessment creep', and late timing of the assessment.
Unfortunately, missing or poor quality impact assessments can have broader repercussions than simply the policy not being improved: they can send out the message that the topic is not important enough to be adequately considered. For instance, Osborne et al. (1999) note, about a poorly implemented Policy Appraisal and Fair Treatment (PAFT) assessment in Northern Ireland:
"Protestant mistrust is now added to Catholic mistrust. A PAFT initiative, which is only partly adopted, is likely to be particularly damaging politically. It is in danger of being seen by both sides of the community as a gesture and not a fully incorporated dimension of policy."