National Qualifications experience 2020: rapid review

Professor Mark Priestley of the University of Stirling was commissioned by the Scottish Government to lead an independent review of the processes through which National Qualifications were awarded in 2020 after exams were cancelled due to the coronavirus pandemic.


Discussion

We commence this section of the report by reiterating the extremely challenging conditions under which the ACM was developed, implemented and subsequently received. This was the majority view among respondents, and such sentiments prefaced most panel discussions. Moreover, while this review has made critical observations about aspects of the process, it is not our intention to apportion blame; instead we see the review as an opportunity to offer constructive criticism which will inform future responses to the ongoing COVID-19 pandemic, and especially to ensure that young people experience a consistent, rigorous and, above all, equitable approach to the award of qualifications in 2021. Award of qualifications in such circumstances is clearly a whole system issue, and requires whole system responses. We also preface our observations here with the following points:

  • Young people, their families and teachers and lecturers deserve as much certainty as can be reasonably given in the face of an uncertain set of circumstances. This entails clear and transparent communication as soon as possible about the arrangements for 2021, and the rapid development of appropriate support and systems for making them happen.
  • It has been communicated very clearly to us – by head teachers, teachers and local authorities – that it will not be possible both to prepare young people for examinations and to work comprehensively to generate evidence to be used if the examinations cannot run. This is a case of either/or, not both. The general view is that we need to prioritise a focus on building a rigorous evidence base.

The development of the ACM required the establishment of quite different approaches to those employed normally – a move into unknown territory. It was clear that centre estimation would be needed as the foundation for awarding, and that some form of national moderation would be needed, given the historical issues of accuracy with predicted grades – exacerbated in this case by a lack of access to the full range of evidence and by under-developed systems for local moderation that could not easily be set up in the available timeframes. This combination of factors created the 'impossible situation' described by many respondents. Within these constraints, a coherent approach was developed that enabled the award of qualifications to proceed – and we note here that in 75% of cases, estimates submitted by centres were not adjusted.

That said, we believe that certain decisions could have been taken differently, and that this may have led to different outcomes, and prevented the subsequent negative reaction that led to this review. These decisions relate to the following:

  • A greater recourse to partnership working in the early stages to develop the ACM. It is clear from our evidence that such working was on offer, but that it was not taken up by SQA.
  • Greater transparency, as requested repeatedly (e.g. by the Scottish Parliament Education and Skills Committee) around the moderation system and its implications.
  • A different presentation of the PCR as an integral part of the awarding process, rather than as a bolt-on appeals process (as is usually the case). It is worth reflecting here on how the use of different terminology might shape perceptions of this phase of any future ACM. We also note here the potential for inequity in a system that intentionally puts large numbers of candidates through a post-award process with impacts (as noted in this report) on transitions.
  • A greater level of embeddedness of equalities impact assessments in the development of awards systems, at the outset.
  • Greater levels of cooperation between agencies, including between the SQA and Scottish/local government, for purposes including analysis of data and national moderation.
  • More systemic engagement with young people, as stakeholders and as rights holders, to inform the development of systems.

In reviewing the 2020 award of National Qualifications, we have engaged with a very wide range of respondents, offering their perceptions of the process and sharing their experiences. We have also reviewed a wide range of written evidence. This has allowed us to form views on the development and application of the ACM, and has informed the recommendations we make in the final section of the report. We conclude this Discussion section with some observations.

First, we see a lack of appreciation, by key bodies throughout the process, that the issue of perceived fairness to individuals might become a toxic political issue if not handled with sensitivity and forethought. This was exacerbated by: 1] a lack of clear processes for embedding thinking about equalities into the initial design of the ACM; 2] limited engagement in collaborative decision making and co-construction at the outset in the development of the model; and 3] a lack of targeted analysis of emerging data trends at key points in the process (compounded by a lack of equalities data at SQA and of data-sharing agreements to permit closer working between the government and SQA).

One of the core issues emerging from this review is the apparent focus on the primacy of preserving previous years' distributions. A statement from SQA in one of our panel discussions would seem to reinforce this view:

At the end of the day the bigger picture is preserving the value of the certificate (SQA panel interview).

This concern seemed, in the view of many respondents, to override the other two principles (Fairness to all learners and Safe and secure certification), meaning that, once the estimates arrived at SQA at the end of May, insufficient attention was paid to the impact on individuals. For example, one head teacher stated that of the three principles, the focus was more on system integrity, and less on young people – and that this is wrong (head teacher panel).

We are not arguing here against the idea that national moderation was necessary; quite the converse, in fact. However, in our view, the main problem with the specific approach to moderation was that the task of maintaining the integrity and credibility of the qualification system was treated as a largely technical exercise, which aimed to fit the shape of this year's distribution of estimates into the shape of the historical grade distribution. To achieve this, a procedure was developed that moved 'entries' (neglecting the fact that 'entries' were not just figures but represented real people) down the grade scale until the optimal distribution was achieved. We would therefore like to shift attention here from 'how suitable the algorithm was for the task' to whether the task itself was operationalised in a valid way. Does a distribution whose shape follows historical patterns deliver fairness to individual learners and ensure that their grades reflect their effort and achievement? We do not think so. In fact, there was no way to achieve this, because the statistical procedure did not use any information whatsoever about the individual candidates. So the main question here relates to what the moderation algorithm was supposed to do; and to do what it needed to do, the algorithm required adequate input (data). The algorithm does not 'care' that the data are individuals; it simply moves the data around until the 'optimal' distribution is achieved. But the solution is 'optimal' only in terms of the total distribution, not because it reflects any attributes of individual learners; the solution could therefore be unstable at the level of individuals. This is why there should have been adequate procedures for sense-checking the data at the level of centres and even subjects (e.g. analysis of data to identify outliers and anomalies), and manual adjustments based upon the qualitative information in the system (e.g. local authority rationales for variance). The appeals process provided a technical solution to this, but one limited by the resources needed to undertake massive numbers of appeals; more especially, it failed to account for the very real impacts on those large numbers of young people, including impacts on mental health and wellbeing, and negative outcomes in relation to transitions to Higher Education.
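To illustrate this distinction, we offer below a deliberately simplified sketch (in Python). It is purely illustrative and does not reproduce SQA's actual procedure; the function names, data structures and threshold are our own assumptions. It contrasts a distribution-fitting adjustment, which consults only grade counts and must therefore choose arbitrarily which candidates to move, with a simple centre-level sense-check that flags anomalous divergence from a centre's own history for qualitative, human review.

# Illustrative sketch only: not the SQA procedure. It shows, in simplified form,
# why a purely distribution-fitting moderation step cannot reflect attributes of
# individual candidates: it operates on grade counts alone.

from collections import Counter

GRADES = ["A", "B", "C", "D", "No Award"]  # ordered best to worst

def fit_to_historical(estimates, historical_share):
    """Move entries down the grade scale until the overall grade distribution
    matches the historical share (a dict of grade -> proportion).
    'estimates' is a list of (candidate_id, estimated_grade) pairs, but note
    that candidate_id is never consulted: only the counts matter."""
    counts = Counter(grade for _, grade in estimates)
    total = len(estimates)
    targets = {g: round(historical_share[g] * total) for g in GRADES}
    adjusted = dict(estimates)
    # Walk down the grade scale, demoting surplus entries to the next grade.
    # Which candidates are demoted is arbitrary, because no individual-level
    # information is available to distinguish between them.
    for i, grade in enumerate(GRADES[:-1]):
        surplus = counts[grade] - targets[grade]
        if surplus > 0:
            demote = [cid for cid, g in adjusted.items() if g == grade][:surplus]
            for cid in demote:
                adjusted[cid] = GRADES[i + 1]
            counts[grade] -= surplus
            counts[GRADES[i + 1]] += surplus
    return adjusted

def flag_centres_for_review(centre_estimates, centre_history, threshold=0.15):
    """Simple sense-check: flag any centre whose share of A grades diverges
    from its own recent history by more than 'threshold', so that qualitative
    evidence (e.g. a local authority rationale for variance) can be considered
    manually before any adjustment is applied."""
    flagged = []
    for centre, grades in centre_estimates.items():
        share_a = sum(1 for g in grades if g == "A") / len(grades)
        if abs(share_a - centre_history[centre]) > threshold:
            flagged.append(centre)
    return flagged

The point of the sketch is not the detail of either routine, but the contrast: the first delivers an 'optimal' overall distribution regardless of who is moved, while the second creates a route for human judgement and contextual evidence to enter the process before individuals are affected.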

We welcome the action by SQA to provide mitigation for the 2021 qualifications diet, and suggest strongly that the arrangements are published as quickly as possible, to obviate concerns in schools, where teaching of courses is already well under way. This is essential to remove uncertainty and to restore teacher and student confidence in the system. We also note that what is necessary this year will not be the same as long-term consideration of the future of qualifications in Scotland. Nevertheless, we have some concerns about the draft proposals published in August, and the revised document due for publication at the time of writing.

  • The proposals appear to be premised on an assumption that the examinations in 2021 will proceed as planned (and therefore seek to reduce the assessment burden to compensate for missed teaching and learning). This is by no means a given. There seems to be little consideration of the need to create a robust evidence base in the event that exams are not possible, and estimation once more becomes necessary. The removal of coursework components in many subjects will further erode the existing evidence base.
  • The review has uncovered concerns that the proposals will lead to a narrowing of courses, with significant implications for education. Related to this, it has been communicated to us that the proposals may impact negatively on attainment, particularly for disadvantaged students who might perform better in coursework. Several respondents have suggested that the divergence between estimates and historical performance this year may not be due entirely to inaccurate estimation by centres (as SQA have consistently stated), but may instead also be influenced by a combination of recent policies to close the attainment gap and the possibility that teacher estimation actually provides a more accurate assessment of achievement than exams (which are said to disadvantage some learners).
  • The SQA proposals have been criticised for offering a piecemeal approach, which differs from subject to subject. While there is some merit in addressing the contextual nuances of different subjects, there is also considerable merit in a set of proposals that offer a more holistic approach across the system. The BOCSH position paper, submitted to SQA in response to the consultation, and outlining a set of radical proposals – including suspending exams for N5 to allow more space for the arguably more important Higher exams diet – should be seriously considered. We have found widespread support for this sort of action from teachers, head teachers and local authorities, as we took evidence for the review.

In the longer term, and beyond the remit of this review, we wish to offer some observations about the future of qualifications. There is widespread support, across all of the stakeholder groups with whom we engaged during the review, for a fundamental rethink of the long-term approach to awarding qualifications. Many spoke of the 'opportunity' presented by the current disruption. The review has found consistent support from all stakeholders (including young people and parents) for a reduced emphasis on terminal examinations as the basis for qualifications. There is widespread support for continuous assessment and its benefits (including the potential for assessments to be used in a more formative way than at present), at a time when teaching to the final test – often in highly formulaic ways – seems to be the norm. We do not hold with the prevalent discourse which frames this debate as an either/or choice – either exams or coursework. Exams have their place in any qualifications system, as a valid method of assessment, albeit (as is the case with other methods) with particular strengths and weaknesses. We do, however, advocate a mature debate about the future of qualifications that involves enhancing assessment literacy amongst education professionals, as well as challenging stereotypical attitudes amongst the wider population about what constitutes valid assessment. This debate needs to be balanced against the literature that points to the potential unreliability of teacher assessment and variable levels of assessment literacy amongst teachers, particularly in highly performative cultures that can encourage grade inflation (e.g. Priestley & Adey, 2010; Willis et al., 2013; DeLuca et al., 2016). This in turn raises broader questions about the governance of education systems, and particularly the place of accountability mechanisms in creating perverse incentives that might distort educational decision making (e.g. see Cowie et al., 2007; Biesta, 2010; Priestley et al., 2015).

This, in turn, raises questions about the continued viability of a 'ladder of qualifications' approach, characterised by the 'two term dash' and a competency-based 'mastery' approach to assessment. We would argue, on the basis of the evidence from our review, that the COVID-19 crisis has stimulated some valuable debate in this area, including amongst young people and their parents, and that the time is ripe for meaningful debate about larger scale reform. We note here that many young people want the opportunity to sit exams next year and said that physical measures should be put in place to allow this to happen (e.g. socially distanced exams, perspex screens). We suggest that these discussions are taken up by the OECD review and subsequently through a national conversation.

