Maintaining the Momentum Towards Fair Access: annual report 2022

The fifth annual report of the Commissioner for Fair Access concludes that all indicators on the fair access scoreboard are flashing green, but Professor Scott warns that maintaining momentum could become more difficult following the damage done by interrupted schooling during the Covid-19 pandemic.

SIMD and Other Indicators

1. Introduction

From the start there has been a debate about whether progress towards fair access should be measured by an area-based indicator, such as SIMD, or an individual-related indicator, such as Free School Meals (FSMs), or by some combination of the two. I have considered the arguments for and against these different types of indicator in previous reports. But the debate remains far from settled, for reasons considered below.

There are several area-based and individual-related indicators.

  • In addition to SIMD, other area-based indicators include POLAR 4, a UK-wide measure which essentially ranks areas in terms of higher education participation; and a new measure of disadvantage based on the lowest level of published census data, which has been developed by HESA (SIMD and the HESA indicator are discussed in greater detail below).
  • In addition to FSMs there are also generic individual-based indicators such as socio-economic classification (SEC), which is derived from a classification of occupations; parental education, in effect whether parents are graduates, which is self-reported on UCAS application forms; and whether applicants have attended State or independent schools. There are also more focused indicators such as care experience.

At a national level SIMD has been used because those were the terms in which the First Minister originally set out her ambition - that by 2030 20 percent of (Scottish domiciled) higher education entrants should come from the 20-per-cent most deprived communities in Scotland. In practical terms SIMD is a straightforward measure. It is clear what is being measured, multiple deprivation across a range of fields, including health, housing and employment as well as education. These different forms of deprivation are strongly correlated - 'intersectional' in current academic language. So focusing on multiple deprivation makes good policy sense. Reliable data is also available, and SIMD is recalibrated at regular intervals to take account of changes in deprivation.

At an institutional level a range of indicators is used - SIMD, of course, to measure progress against national targets but also care experience; FSMs; attendance at (typically local) schools with low progression rates to higher education with which institutions have links; participation in SFC-funded access initiatives (such as the Schools for Higher Education Programme (SHEP), the Scottish Wider Access programme (SWAP) and the Access to High Demand Professions); participation in bridging programmes and summer schools; and others. Applicants with one or more of these characteristics are given guaranteed places or special consideration, and benefit from the minimum entry requirements now published by every institution at course level. It was never envisaged that institutions would use SIMD as the only measure of progress towards fair access.

Seen in that light there does not need to be a choice between area-based and individual-related indicators. Both can be - and are - used. The continuing debate, therefore, is narrower in scope. It boils down to this - should national targets continue to be calibrated solely in terms of SIMD, or should they be revised to include other indicators that relate to individuals (FSMs are most frequently mentioned in this context)? There may also be a second question - should national targets be abandoned, and replaced by targets set by institutions themselves using the indicators they believe are most relevant (and policed through outcome agreements negotiated with the SFC)? The latter is essentially the approach that has been taken in England, initially by the Office for Fair Access (OFFA) and now by the Office for Students (OfS) - not necessarily, of course, a reason for it to be rejected.

2. The case against SIMD

There are three main arguments against relying solely on SIMD.

  • The first is familiar - the problem of false-positives, applicants who are not themselves deprived but live in SIMD20 areas, and false-negatives, applicants who are deprived but do not live in SIMD20 areas. Understandably more attention has been focused on the second group because the effect of relying on SIMD could be to discriminate against some applicants who are genuinely deprived by rendering them 'invisible'. In practice, this is less likely to happen because all institutions use a basket of indicators. But it remains a risk.
  • The second is that SIMD areas vary widely, not in population size but in geographical extent. In urban areas they identify areas of concentrated multiple deprivation. Outside cities and larger towns, in particular in the Highlands and Islands and the Borders, deprivation is more widely distributed. Rural poverty is no less real than urban poverty simply because it is harder to identify. Later in this section the north east is considered in more detail, as a case-study.
  • The third, and least familiar - but possibly most powerful - reason is that reliance on SIMD as the only measure of progress towards fair access means that national targets have ceased to be stretching for a significant number of institutions. These include nearly all colleges (with regard to their higher education provision), and all 'post-1992' universities with the exception of Queen Margaret University and Robert Gordon University. All these institutions meet, or exceed, national benchmarks in terms of SIMD, either because of their geographical location or because they have traditionally recruited from a much wider section of the population, or both. The effect is that national targets are relevant to only a minority of institutions. The rest have a free pass. This effect will intensify as progress is made towards meeting these targets.

Each of these arguments needs to be unpacked in greater detail.

3. False positives and negatives

Every indicator, area-based or individual-related, carries a risk of producing false-positives and false-negatives.

For example, a FSMs indicator has to be based on take-up rather than eligibility, which excludes some disadvantaged young people. Even take-up figures are influenced by the different approaches taken by local authorities and by variations in record keeping by schools. Decisions must also be taken about which FSMs recipients should be included in the indicator. The conclusion reached by the Access Data Group - the work of which sadly has not been progressed - was that anyone who had received FSMs at any stage in their secondary education should be included. Although that minimises the danger of a significant number of false-negatives, there is still a risk of false-positives being included - in other words, applicants who are no longer deprived but were at an earlier stage in secondary school.

Finally, a FSM indicator treats all those included as if they were equally disadvantaged in terms of access to higher education. But it is well known that FSM recipients in some local authorities have much higher levels of higher education participation than recipients in others. There is an especially stark example in England where FSM recipients in the London Borough of Hackney are twice as likely to participate in higher education as those in Middlesbrough in north-east England. This confirms how much the characteristics of the community in which disadvantaged young people are brought up - and, in particular, levels of aspiration - matter.

However, the main criticism about indicators producing false-positives and false-negatives has been directed at area-based measures such as SIMD. This criticism has had two main elements:

Mismatch between deprived communities and disadvantaged individuals

Such a mismatch is apparent if SIMD is compared with FSMs. There are more local authorities (18) in which the percentage of S1-6 students receiving FSMs is greater than the percentage in SIMD20 areas, than there are local authorities (13) in which the percentage in SIMD20 areas is greater than the percentage receiving FSMs. For example, in Glasgow City 56.2 percent of S1-6 students are in SIMD20 areas but only 34.7 percent are receiving FSMs. In contrast, in Orkney, Shetland and the Western Isles there are no S1-6 students in SIMD20 areas (because there are none in these local authority areas), but 5.5 percent, 7.5 percent and 11.1 percent respectively receive FSMs. But this prima facie evidence of a serious mismatch between SIMD, an area-based measure, and FSMs, an individual-related measure, is not as conclusive as it appears. Most of Scotland's population is in the 13 local authorities that include Glasgow, Edinburgh, Dundee and other major urban areas - apart from Aberdeen. It also needs to be borne in mind once again that not all FSM recipients are equally disadvantaged in terms of higher education participation.

Differences between different area-based measures

Comparatively few Scottish students are included in the bottom quintile of POLAR 4, the UK-wide indicator used in England - essentially because Scotland has higher levels of participation in higher education. HESA has recently developed an experimental area-based measure based on the percentage of those aged 16 and over with below level 4 education and the percentage of 16 to 74-year-olds in SEC 3-8 categories (essentially non-professional and managerial jobs) at the lowest level of census data - which in Scotland covers between 20 and 78 households. Because this indicator is based on census results which are 10 years old, it clearly cannot be used to replace more dynamic area-based indicators such as SIMD. But the differences between the two indicators are revealing. According to the HESA indicator, 16.5 percent of all students in the bottom quintile are in Glasgow, compared with 30.6 percent in SIMD20. In general 55.9 percent of SIMD20 students are in large urban areas, compared with only 33.8 percent in the bottom quintile of the HESA indicator. According to the latter indicator deprivation is more widely distributed, with students from smaller towns and rural areas featuring more prominently than in SIMD.

4. SIMD's big-city bias

This bias of SIMD to big cities is relatively easy to explain. As the intention of SIMD is to measure multiple deprivation in communities, it has seven different components, of which educational disadvantage is only one. Such deprivation is concentrated in large urban areas, although the individual components are more widely distributed. In contrast the HESA indicator has only two components - level of education and job types. The wider mesh of SIMD allows more areas to be identified as deprived. POLAR 4, of course, has only one component - participation in higher education. In other words, these different results reflect different methodologies.

Currently, of course, SIMD is the only indicator used to measure progress towards meeting the Government's targets on fair access - both the 2021 and 2026 interim targets and the 2030 final target at a whole-system level; and the target of at least 10 percent SIMD20 entrants for institutions. Although it has always been accepted that SIMD20 student shares would vary significantly between institutions, the 10-per-cent 'floor' that all institutions are expected to meet draws attention to those institutions that fail to meet it - even though no direct penalties are attached to failure.

The fact that the two universities that have found it most difficult to recruit 10 percent of their students from SIMD20 areas are both in the north east - the University of Aberdeen and Robert Gordon University (and that they are contrasting types of institution, an 'ancient' and a 'post-1992' university) - has led to the conclusion that the problem lies with the measure being used, SIMD. This is reinforced by the fact that North East Scotland College, despite an impressive commitment to, and well developed policies on, reaching out to disadvantaged students, also has a low percentage of SIMD20 students by college standards.

Two variables are important - the absolute number of SIMD20 students in schools in the region in which a higher education institution is located, which is determined by the number of SIMD20 areas in that region; and the proportion of local students that the institution recruits. Aberdeen and Robert Gordon are disadvantaged on both counts. The proportion of S1-6 students in SIMD20 areas in Aberdeen and Aberdeenshire is low - 11.7 percent and 8.6 percent respectively. The proportions in adjacent local authority areas such as Angus or Moray are also low. As a result, both universities are over-dependent on recruiting SIMD20 students from other parts of Scotland, in practice the central belt. To some extent that has gone against their tradition as local recruiters.

As long as SIMD is the only measure of progress towards fair access this outcome is inevitable, because the majority of SIMD20 areas are in greater Glasgow, Edinburgh and Dundee. SIMD20 students are also less willing than students from middle-class homes to move far from home to study. As a result the two universities in Aberdeen are put under pressure to recruit SIMD20 students from outside their region - a pressure that universities in other cities do not face. For example, Robert Gordon has a target of recruiting 240 SIMD20 students but expects only 50 to come from within its region. In contrast, universities in the west of Scotland are able to recruit the majority of their SIMD20 students locally; the University of Glasgow, for example, recruits more than 80 percent of its SIMD20 students locally.

This out-of-region recruitment is expensive - in terms of the direct cost of recruitment, guaranteed accommodation and additional support. It is also not necessarily in the best interests of students who may be better off studying closer to home. There is a wider argument that encouraging talented individuals to leave behind the deprived communities where they were brought up, which is one likely result of not studying locally, has the effect of further impoverishing these communities.

Would including FSMs alongside SIMD as a measure of institutional (not necessarily national) progress help? The answer must be that it would, but not decisively. In both Aberdeen and Aberdeenshire the proportion of S1-6 students receiving FSMs is below the national average of 17 percent. According to a calculation made by Robert Gordon University, the actual numbers were 312 and 325 respectively in 2020. Both SIMD20 and FSM students are in short supply. As a result the University believes that including FSMs would produce only a limited change. However, even if the absolute number of FSM recipients is modest, including FSMs alongside SIMD would increase the number of schools with which universities work, potentially raising levels of aspiration and attainment as well as producing a bigger supply of potentially disadvantaged applicants. The University of Aberdeen believes that it would double the number of schools with which it engages.

5. Does SIMD give some institutions a free pass?

The third argument against relying exclusively on SIMD is that it gives most institutions a 'free pass', because all except five have already met the 10-per-cent institutional target. Of the institutions that have met it, five are still below the 16-per-cent interim national target for 2021; two more have exceeded that target but still fall below the next interim national target of 18 percent in 2026; and the remaining five have met or exceeded the 20-per-cent final national target for 2030. The universities that fall into these categories are listed in the table below.

[Table not fully recoverable in this copy. Column headings: Institutional target; Interim national target (2021); Interim national target (2026); Final national target (2030). Thresholds: below 10%; below 15%; below 16%; below 18%; 20% or more. Surviving entries: Robert Gordon; St Andrews; Rural College.]
This table highlights that for many institutions the 10-per-cent target has ceased to exert any direct pressure. Of course, this does not necessarily mean their commitment to fair access has been reduced. Despite having exceeded the 10-per-cent target for individual institutions, they continue to work hard to contribute to meeting the national targets, interim and final. However, as the table shows, some have more to do than others. The pressure imposed on institutions by targets varies accordingly. In one sense that does not matter. It is safe to assume that universities which already recruit a high proportion of SIMD20 entrants are strongly committed to wider access because they have always recruited significant numbers of disadvantaged students. Also all higher education institutions have set out their ambitions for fair access in their outcome agreements with the SFC. But these are individualised ambitions, alongside a range of other ambitions (for example, on research performance), which by their nature are not directly enforceable.

However, it does call into question the usefulness of retaining the institutional target at its current 10-per-cent level. There are three options:

  • To abandon the institutional target completely and rely solely on national targets to drive forward progress on fair access.
  • To increase the institutional target, which was envisaged by the Commission on Widening Access. But even a 15-per-cent target would still exclude the majority of universities, while further detracting from the credibility of SIMD as a measure of disadvantage at the institutional level.
  • To recalibrate the institutional target - either by adding in an individual based measure such as FSMs, or by relying on the ambitions set out by institutions in their outcome agreements to monitor progress.

6. Conclusions

Despite misgivings about SIMD, no indicator is entirely satisfactory as a measure of disadvantage. But some indicator, or basket of indicators, is needed if targets are to be used to drive forward progress towards desirable public policy goals (and there is no serious dissent from the belief that fair access to higher education is such a goal).

The choice between available indicators is not simply a technical one. It is also a political, and conceptual, one.

  • For example, the technical effect of replacing SIMD with FSMs, or of producing a composite SIMD-FSMs measure, as the main indicator of progress towards fair access would be to make it (a little) easier for Aberdeen and Robert Gordon to meet their targets, but conversely to make it more difficult for universities like Glasgow and Strathclyde. Would such a rebalancing be in the interests of fair access to higher education in the light of past institutional commitment? Another practical effect could be to allow all institutions to meet their targets more easily. Again, in the wake of the disruption produced by Covid, in the light of future reductions in family income, and given the challenge of meeting the 2026 and 2030 targets, is this the time to ease the pressure?
  • But replacing SIMD with FSMs, or even producing a composite measure, would also have a wider conceptual effect by tending to confirm the view that the aim of fair access was to provide greater opportunities for talented, but disadvantaged, applicants to go on to higher education, with universities being required to make only those adjustments necessary to ease their path. There is an alternative view - to see fair access to higher education as one element in wider strategies to address multiple forms of deprivation that are deeply entrenched, community-rooted and multi-generational.

However, the credibility of SIMD has been sufficiently questioned to make it difficult to retain it as the only official measure of progress towards fair access. The best course, therefore, is a middle one, in effect a twin-track approach - to maintain national targets expressed in terms of SIMD but allow greater flexibility in the indicators of institutional progress. This would mean that single-measure institutional targets would be abandoned because they no longer serve any useful purpose. Instead the SFC would be expected to negotiate fair access targets with institutions as part of the negotiation of outcome agreements, which would be both stretching and enforceable. This would formalise the different approaches already taken at national and institutional level, the former focusing on a single measure and the latter using a range of indicators. To ensure transparency the SFC could be required to publish an annual red-amber-green scorecard indicating its assessment of the progress made by institutions against their agreed targets.

Recommendation 2

National targets on fair access should continue to be defined in terms of SIMD. But institutional SIMD targets are no longer fit-for-purpose. Instead institutions should be able to use their own basket of measures to determine their own targets. But these new targets should be strictly policed by the SFC through outcome agreements.


