It's Our Future - Independent Review of Qualifications and Assessment: report

Final report of the Independent Review of Qualifications and Assessment in Scotland.


5. Assessment, Information Technology and Artificial Intelligence

5.1 Assessment and Information Technology

The advances in information technology since the beginning of the 21st century have been significant, and these advances have been mirrored by advances in assessment. We have encountered a range of innovative assessment approaches as part of this Review: assessments that adapt to the responses of learners to provide more personalised assessment; real-time assessment, where learners receive feedback as the work is being assessed; augmented reality, where assessment is undertaken through simulation; and the use of apps to have learners respond throughout a lesson and provide information on class responses. The range of innovative approaches to assessment is ever expanding and offers new ways to assess and to examine progress and achievement.

Ofqual’s (2020) report, Online and on-screen assessment in high stakes, sessional qualifications, notes: “There is a broad body of research examining the potential benefits and challenges of implementing online and on-screen assessment in different contexts.” Ofqual suggests that there are three barriers currently preventing greater use of digital assessment: variation in IT provision in schools and colleges (range of and access to devices/broadband speed), implementation challenges (national or local implementation/mandatory or voluntary adoption) and ensuring fair treatment of all students (managing the impact of different software and devices). While the report was written pre-pandemic and reflects the context in England, it provides a useful base from which to consider how digital assessment could be better established in Scotland, where the barriers are likely to be the same.

Ofqual suggests a range of actions which may help overcome barriers in this area, a number of which are relevant to the Scottish context and resonate closely with the findings of this Review. Possible actions suggested by Ofqual include:

  • Jurisdiction-wide initiatives led by a sponsoring national or regional government or awarding organisation, often in collaboration – which feature: investment in school/college infrastructure and online or on-screen systems, a well-considered risk appetite including an acceptance that things may go wrong, and system leadership.
  • A vision that assessing on-screen or online matches wider societal changes and needs, including those of students and employers, and that the anticipated benefits justify the investment and required appetite for risk.
  • Significant engagement and communication activities with key stakeholders, often inviting early adopters to play an influential part in the roll out of programmes or pilots.
  • Clear advice and support for teachers, IT support staff, examinations officers and invigilators on expectations of them prior to and on the day of the assessment.

To maintain consistency, fairness and continuity in respect of digital assessment, a coordinated national approach appears to be essential. We also note Ofqual’s suggested need to communicate clearly the benefits of a change while also supporting those responsible for learning, teaching and administration of the assessment itself. A fair national approach, clear communication of change and support for the education workforce will all be key in a move to digital assessment, and indeed in the broader reform of qualifications and assessment in Scotland.

5.2 Artificial Intelligence

Earlier in this report we discussed the potential of AI to disrupt current qualification practices and society more broadly. We highlighted the different reactions to the potential impact of AI on coursework. Opinions range from those expressed by the head of Ofqual, England’s examinations regulator, who argued that AI bots could lead to the end of coursework, with invigilated examinations becoming more important, to the position taken by the CEO of the IB, who proposed that we should learn to live with AI. The IB would not ban the use of AI or change the nature of the IB programme, which includes coursework. He positioned AI as an addition to existing technologies, such as spell checkers, translation software and calculators. The task for educators, he argued, was to support learners to use AI tools effectively and ethically.

5.3 Artificial Intelligence: unsettling or transformational?

AI in the form of Large Language Models (LLMs), such as ChatGPT, has inspired both shock and admiration at the apparent capability of the technology. Earlier in this report we outlined ways in which this new technology exemplifies the nature and speed of changes in societies across the world. AI has also raised questions about the knowledge, skills and competences learners will need to be qualified as citizens able to participate in a mid to late 21st century democracy. We also discussed the difficulty of making accurate predictions about the future and, therefore, the importance of having an adaptable, flexible system for qualifications to allow timeous adaptations to be made as required.

When a potentially disruptive technology, such as AI large language models, is first introduced, it can be difficult to assess its potential impact and, essentially, to separate authoritative comment from more speculative views and reactions. In May 2023, the Institute of Electrical and Electronics Engineers (IEEE) published an interview with Rodney Brooks (an authority on AI and a faculty member at MIT, Carnegie Mellon and Stanford Universities) (Zorpette, 2023). The IEEE is an authoritative voice as the world’s largest professional organisation for engineering and applied sciences.

The article is entitled “Just Calm Down About GPT-4 Already”. Brooks argues that ‘all rapid and pivotal advances in technology have a way of unsettling people’. He is not as fearful as others are of recent developments in AI, but he argues that there is a danger that we mistake performance for competence.

“We see a person do something and we know what else they can do, and we can make a judgment quickly. But our models for generalising from a performance to a competence don’t apply to AI systems.”

LLMs, he suggests, are better than search engines, but they have a fundamental problem: they provide answers with confidence, but the answers are commonly inaccurate. He is sceptical about whether the next generations, GPT-5 or GPT-6, will make significant progress, because an LLM has no underlying model of the world; it simply correlates language, for example by predicting the next word.

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be… I think it’s going to be another thing that’s useful.”

5.4 AI: Truth, Fake News and 21st Century Literacy

How to know whether or not text is dependable is not an issue that has arisen only with ChatGPT. The rise of social media and the impact of ‘fake news’ have troubled societies for some time. Concerns about the ever-increasing challenge of knowing whether you can trust what you see, hear or read are now being reflected in the practices of major organisations. For example, the BBC, in May 2023, launched BBC Verify with the following justification:

“The exponential growth of manipulated and distorted video means that seeing is no longer believing. Consumers tell us they can no longer trust that the video in their news feeds is real. Which is why we at the BBC must urgently begin to show and share the work we do behind the scenes, to check and verify information and video content before it appears on our platforms. And as AI weaponises and turbocharges the impact and consequences of disinformation, this work has never been more important.”

These questions about whether we can trust what we see or read are also evident in text produced by AI. Current LLMs may produce text that sounds convincing but contains inaccuracies. Other technological advances raise similar problems. This is a crucial issue for the future of education. To be literate in a mid to late 21st century society means more than being able to read and write. Being digitally literate, and understanding how to check whether a source is dependable or a response accurate, will be fundamental skills for citizens in all democratic societies and will be a major responsibility of education systems.

5.5 AI: Implications for Qualifications and Assessment

There are implications for the curriculum in schools and colleges. Learners and teachers/lecturers will have to learn how to use these tools, for example how to write good prompts, and to understand what LLMs are good at and what their limitations are. There are also implications for assessment and qualifications. Different kinds of tasks will be needed in coursework. For example, a learner could be asked to generate a ChatGPT answer, and the coursework task would be to check the accuracy of the response or to consider how the AI-generated response might be improved. ChatGPT-4 is the AI tool that has attracted most attention, but since its release numerous others have followed that generate images, sounds, computer code and video. There is the potential to use them to have learners generate a far wider variety of forms of evidence of learning than the methods we currently use most commonly, for example essays.

Teachers and learners have already been experimenting with ChatGPT. Some teachers have reported that lesson plans can be produced in a fraction of the time taken traditionally. Others have used ChatGPT to evaluate papers and have reported that it provided detailed and useful feedback very quickly. Websites are already beginning to emerge with lists of ways to use AI to help teachers and learners, for example Ditch That Textbook. As with all resources, these need the critical, professional eye of the teacher, but they offer the potential to allow teachers to spend more time on supporting learning rather than on more bureaucratic activities.

There are also likely to be implications for the ways in which we judge achievement. For example, assessing a learner’s work might, in future, be undertaken with the learner present. Learners would be judged on their descriptions of the process and how the output was achieved, rather than on the output alone.

One of the most common challenges emerging from AI is the fear that its use will lead to cheating in coursework. However, cheating is not a new concern. Coursework, and the potential to cheat, has always been a source of concern in high stakes assessment, sometimes reflected in dramatic newspaper headlines. For example, on 14 January 2021 the Scottish Sun ran the headline, “Qualification Fraud: Coronavirus Scotland: Lockdown ‘allows school pupils to cheat on SQA coursework’ with fears parents are doing it”. Although the text of the article is a little more balanced, the headline is stark.

During the Review, a number of respondents raised parental support with existing coursework as an issue of equity. There were perceptions that learners from socially advantaged communities were more likely to be supported at home than learners who came from less advantaged circumstances. In the current context, teachers are aware of the danger that coursework could be undertaken by someone other than the learner. They know the learners with whom they work and the kinds of performance they would expect from each learner. They spot, for example, when coursework is not consistent with classwork produced by the learner. The same approaches that teachers use currently to ensure that coursework is authentic will help detect AI-generated coursework. Some argue that, if all learners have access to AI, it might even help to level the playing field. Undertaking coursework tasks in education settings is one strategy to promote greater authenticity.

5.6 AI: A Common Approach Across Education

Debates about how best to live with AI are currently underway in colleges and universities, and the strategies outlined above are among the ways in which they intend to tackle the current challenges of AI. It would be helpful if there were common approaches to the use of AI across all education contexts.

When a technology is new, how it will ultimately be used is open to debate. Mitchel Resnick (2023), Professor of Learning Research at the Massachusetts Institute of Technology (MIT), suggests that for AI, just as with all previous technologies, for example personal computers or the internet, decisions have to be taken about whether, and how, the new technology should be integrated into the learning environment. He advocates beginning from first principles: decisions about what kind of education we want for our learners and our society. We then design the uses of the new technology to be consistent with our values and vision.

Scotland’s educational values are those inscribed on the mace in the Scottish Parliament: “wisdom, compassion, justice and integrity”. The country’s commitment to children’s rights, as articulated in the UNCRC, puts learners at the heart of education. Every learner matters. These values sit well with Resnick’s argument that, given the uncertainties of the future, all learners need opportunities ‘to think creatively, engage empathetically, and work collaboratively, so that they can deal creatively, thoughtfully, and collectively with the challenges of a complex, fast-changing world’ (Resnick, 2023).

Resnick (2023) suggests that, at present, the future is ours to write. AI has the potential to have a negative impact on learning by constraining learner agency: AI systems could become traditional tutors, setting learners’ goals, giving information, asking questions and assessing performance. Or they could be designed to help learners build agency, setting their own goals and expressing their own ideas. The latter approach, he argues, would help build initiative, confidence, motivation and creativity: the skills learners will need as future citizens.

Resnick (2023) argues that if the pandemic has taught us anything, it is how important relationships are. AI systems can provide useful feedback, but they cannot build the relationships with learners that good teachers do: getting to know learners, their motivations and their concerns. Good teaching involves knowing how to build mutually supportive communities of learners where learners feel that they belong.

AI could enhance learning by supporting project-based, interest-driven learning experiences and by giving students an understanding of how to use AI tools as a resource to support the creative learning process. Resnick (2023) argues that AI systems should be seen as a new category of educational resource. He proposes that, as educationalists, we should develop a set of guiding principles to design and use AI systems ‘to engage young people from diverse backgrounds in creative, caring and collaborative learning experiences’. We should:

  • support learners as they engage in design projects and navigate the creative learning spiral;
  • ensure that learners feel a sense of choice and control in the learning process, enabling them to develop their interests, their ideas, and their voices;
  • supplement and support (rather than replace) human interaction and collaboration;
  • provide opportunities for learners to iterate and refine their ideas and their creations; and
  • take into account the different needs, interests, and aspirations of learners from diverse backgrounds, especially those from marginalised and vulnerable communities.

However, at present AI is insufficiently regulated and future security will depend on governments internationally acting now to build in safeguards for future generations of AI.

Recommendation 12: Establish a cross-sector commission on Artificial Intelligence in education.

  • As a matter of urgency, the Scottish Government should convene and lead a cross-sector commission to develop a shared value position on the future of AI in education and a set of guiding principles for the use of AI.
  • The use of AI LLMs, such as ChatGPT, should not be banned, but learners and teachers/lecturers must be supported to make best use of them. AI offers the potential to reduce administrative burdens and to lessen the time taken for other teaching tasks. All opportunities to do so should be taken.
  • Coursework should remain an integral part of qualifications but existing tasks should be reviewed to ensure that they are compatible with the new context created by recent developments in AI.

Contact

Email: qualificationsreform@gov.scot
