1.1 What is SNSA?
In January 2016, the Scottish Government published The National Improvement Framework for Scottish Education (hereafter ‘the Framework’). The Framework set out the Scottish Government’s vision and priorities for Scotland’s children and young people. It was developed to support high-quality learning and teaching – the core principle of Curriculum for Excellence (CfE). Over time, it was intended that the Framework would provide a level of robust, consistent and transparent data across Scotland to extend the understanding of what works, and drive improvements across all parts of the system.
To meet the aims of supporting high-quality learning and teaching for Scottish children and young people, it was determined that gathering data on children’s progress at key points in their education, including data on differences between those from the least and most deprived areas, was essential. Improved data of this kind would support the planning of further interventions to ensure that all learners achieve as well as they can. Part of this information would be provided by the SNSA.
The assessments have been available for use in publicly funded schools in Scotland since August 2017. They are administered to children and young people in Primary 1, Primary 4, Primary 7 and Secondary 3 (P1, P4, P7 and S3) across Scotland, once in each school year, at any point during the year. Reports to schools and teachers are provided as soon as a learner completes an assessment. Additional reports are available for local authorities. This national report presents a description of what SNSA sets out to measure, and some findings from the first year of the programme.
Results from Scottish National Standardised Assessments provide one source of evidence, among a wider range, to support teachers’ professional judgement of children’s and young people’s progress in learning. ACER’s approach to learning assessments is that they should assist in:
- clarifying starting points for action
- investigating details of student learning and performance
- monitoring improvements and evaluating educational interventions
- motivating effort and encouraging self-monitoring
- providing feedback to guide future action.
The user reports provided for the SNSA support a number of these points by providing teachers, school leaders and local authorities with diagnostic information about learners’ strengths and areas of challenge that can be used to plan next steps in learning. Alongside other assessment evidence, the information reported in SNSA can also be used to inform teachers’ professional judgement on achievement of CfE levels. A central aim of SNSA is also to provide information on the outcomes of Scottish children and young people in literacy and numeracy over time.
1.2 Key features of SNSA
The SNSA programme has a range of important and innovative features:
- it is delivered online
Children and young people present for the assessments using a digital device: a desktop computer, laptop or tablet. The assessments are delivered online, and because all items (questions) are automatically scored, teachers can access their learners’ reports as soon as an assessment is completed.
SNSA are designed to be administered on a range of devices, including desktop PCs, laptops and tablets, and delivery on the most commonly available browsers is supported. This flexibility in mode of delivery is designed to support administration of SNSA across a range of different classroom settings and enables schools to choose the method of presentation that best suits them. An online tool to assess technical readiness is available. This tool can be run in advance on any device that will be used for the assessments, to confirm that the assessments function as expected. With this flexibility of delivery, the content of the assessments, within the adaptive design model, remains consistent.
- it is adaptive
The questions presented to children and young people vary according to how well they are performing on the questions they have answered so far. All learners begin an assessment with a set of questions of middle-level difficulty. If a learner does well on these, the next set of questions presented will be more challenging. If a learner is not succeeding on early questions, the items become easier – and so on, through the assessment. The adaptive nature of SNSA means that the experience for each learner is modified so that the assessment is neither too hard nor too easy but appropriate for their level of capacity. The adaptive design also means that the diagnostic value of the assessment is optimised. An assessment is most useful as a formative tool when there are no ‘floor’ or ‘ceiling’ effects. A ‘floor effect’ occurs when an assessment is too hard, so it tells only what a learner cannot do. If this happens, it is impossible to see a starting point on which future learning can build. A ‘ceiling effect’ occurs if an assessment is too easy and a learner gets every question right. When this happens, it is impossible to judge the upper reach of their attainment and thus to help this learner to go the next step. The adaptive design, when working well, enhances the learner’s experience of the assessment and serves optimally in establishing where children and young people are in their learning development.
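The adaptive logic described above can be sketched as a simple rule: start at middle difficulty, then move up or down according to performance so far. This is an illustrative simplification only; the function name, thresholds and difficulty labels below are hypothetical and do not represent SNSA's actual adaptive algorithm.

```python
# Illustrative sketch of a multistage adaptive design, NOT SNSA's
# actual algorithm: the 0.7 / 0.3 cut points are hypothetical.

def next_question_set(score_so_far: int, questions_answered: int) -> str:
    """Choose the difficulty of the next set of questions from the
    proportion of earlier questions answered correctly."""
    if questions_answered == 0:
        return "medium"          # every learner starts at middle difficulty
    proportion_correct = score_so_far / questions_answered
    if proportion_correct >= 0.7:
        return "hard"            # doing well: present more challenging items
    if proportion_correct <= 0.3:
        return "easy"            # struggling: present easier items
    return "medium"              # otherwise, stay at middle difficulty
```

In this sketch, a learner who answers 8 of the first 10 questions correctly would be routed to a harder set, while a learner who answers 2 of 10 correctly would be routed to an easier one, keeping the assessment neither too hard nor too easy.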
- it has a carefully judged number of questions per assessment
Each assessment has from 30 to 36 scored items, with the number of questions increasing from Primary 1 to Secondary 3. These numbers of questions allow coverage of different aspects of each of the assessed subject areas, without excessive time being required by any learner. On average, in the 2017 to 2018 academic year, children and young people completed each of the assessments within 30 to 40 minutes (less than 30 minutes for Primary 1 children). However, there is no time limit for completing SNSA, and where a teacher judges it necessary, a child or young person may take a break and come back to pick up the assessment where he or she left off.
Each question in the assessments has been empirically tested to make sure it ‘works’. Before being included in SNSA, every question has been presented to several hundred learners of a similar age and stage to the ultimate respondents, to ensure that it has sound measurement characteristics and will yield statistically consistent results. In addition, every question has been reviewed and signed off by a panel of experts from within Education Scotland.
- responses are objectively scored
The majority of questions in SNSA are in ‘selected response’ format, mostly multiple choice. The advantages of this format are both educational and technical. First and foremost, an advantage of this format is that all responses are marked consistently, so there is no question about the reliability and standardisation of the results at the question level. A second advantage, in terms of curriculum, is that because learners can complete questions relatively quickly, a wider range of curriculum content can be covered in a limited time than would be possible if children have to generate their responses. A third advantage is that the assessments can be instantly marked, allowing the allocation of assessment items of an appropriate difficulty level within the adaptive design. Additionally, reports can be accessed as soon as an assessment is completed, so teachers can use the formative feedback immediately. There is, of course, much to be learnt about children’s understanding and skills from other modes of assessment, from short written responses to essays or projects and performances. However, assessments using selected response formats serve the purposes of SNSA well in its role as one element in the wider array of assessments that teachers will use to evaluate children’s and young people’s learning.
The exception to the typical multiple-choice assessment format in the academic year 2017 to 2018 was the relatively small number of spelling items in the writing assessments for P4, P7 and S3 that required a constructed response. These items included a text input box, where learners typed their spelling of a specific word. This item type allowed a quick response from learners and was automatically marked, so it carried the same benefits as the closed-response items.
Other features of the SNSA programme are specific to the Scottish education context.
- it covers agreed elements of Curriculum for Excellence
The assessments have been constructed to align with CfE. A design for each assessment covering organisers and learning statements defined in the Benchmarks: Literacy and English and Benchmarks: Numeracy and Mathematics (Drafts, August 2016) was agreed with Scottish Government and Education Scotland before the assessments were built. It should be noted that for the academic year 2018 to 2019, the final version of the Benchmarks (published in June 2017) is used as the reference point for the assessments. The content areas covered are described in more detail in the sections of this report dedicated to numeracy, reading and writing.
- it has a flexible delivery model
The flexible delivery model is intended to allow children and young people to be assessed at any time in the school year that is judged suitable for the school, class and individual learner. A consequence of the flexible timing is that, when interpreting the results of the assessment at individual, class, school, local authority or national level, the point in the school year that the assessment was taken needs to be taken into account.
There is clear evidence from the norming studies conducted during the academic year 2017 to 2018, in November and March, and from the whole year’s attainment levels per stage, that children’s and young people’s capacities – their literacy and numeracy skills, knowledge and understanding – develop progressively, on average, over the 10 (effective) months of an academic year. Amongst the year groups presenting for SNSA, children in Primary 1 showed a marked increase in capacity in both literacy and numeracy: this can be seen when comparing results from 2017 (August to December) with those from 2018 (January onwards). The same pattern was observed for P4, P7 and S3, across all subject areas, but with diminishing increases in performance in 2018 for each successive year group. Within each year group, the rate of improvement between the first half and second half of the 2017 to 2018 academic year was similar, regardless of subject area. The only exception to this general pattern of improvement from 2017 to 2018 was for Secondary 3 reading, where the overall result was the same.
While the findings described above might be as expected, they also constitute a positive result, confirmed empirically with SNSA data. However, given the possibility of administering SNSA throughout the school year, results from all learners should be interpreted with some caution when making any comparative judgements about individuals or groups. Each learner presented only once, and, because the timing of the SNSA was locally determined (except for the norming studies), it cannot be assumed that the profile of children and young people who presented in the first half of the school year was the same as that of those who presented in the second half. For example, it is possible that teachers chose which learners should sit the assessment based on their judgement of their learning progress.
- it is designed to be accessible to all learners
The system is designed to be compatible with a range of assistive technologies, including software and devices such as text readers, screen readers and switches, so that learners can complete the assessments using the tools they are familiar with from everyday classroom use. In the case of screen readers, the assessments have been developed to include alternative text descriptions of images, charts and graphs that are integral to answering a question. Detailed guidance is available for teachers in relation to additional support needs (ASN) and English as an additional language (EAL). The information gathered from across the school year, on which the analysis within this national report is based, includes data from learners with ASN and EAL.
1.3 Reporting SNSA results
In the academic year 2017 to 2018, the terms ‘high’, ‘medium’ and ‘low’ were used in reporting the results of SNSA to schools and local authorities, and they are also used in this report.
1.3.1 Reporting on learner capacity
The reports available to schools and local authorities for SNSA 2017 to 2018 provided diagnostic information about each question presented to an individual or group of children or young people. This diagnostic information showed, for each question, which organiser the question belonged to, the skills, knowledge and understanding assessed and the question’s difficulty, as well as the individual’s or group’s results on the question. This diagnostic information provides one piece of evidence to help the education profession identify areas of strength or challenge at the individual learner level or for groups.
Another key feature of the reports for schools and local authorities in the first year of SNSA, the 2017 to 2018 academic year, was information about learners’ overall results. Each year group’s capacity was reported in three broad regions: high, medium and low. The capacity of learners who achieved only a small degree of success on the assessment was labelled low. Conversely, the capacity of learners who achieved a substantial degree of success on the assessment was labelled high. These broad overall capacity regions were related to regions of learner capacity on the assessment that were specific to each subject area and year group, and each capacity region for each of the eleven SNSA had a corresponding description unique to that assessment. These descriptions were based on a summary of the skills, knowledge and understanding assessed in the questions included in this first assessment in the academic year 2017 to 2018, which, in turn, were aligned with the Benchmarks. The region descriptions for each assessment and stage are shown in Appendix 6: Region descriptions from the 2017 to 2018 individual reports.
The location of a learner’s capacity indicated that he or she was twice as likely as not to succeed on the questions in the assessment addressing the skills, knowledge and understanding in the description for that region. The position locating the learner’s capacity against these descriptions, on their individual reports, showed the kinds of skills, knowledge and understanding he or she demonstrated in the particular assessment.
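The phrase ‘twice as likely as not’ corresponds to odds of success of 2 to 1, that is, a success probability of two thirds. As an illustration only (this report does not describe SNSA’s actual scaling model), under a Rasch-type item response model – a common choice in assessments of this kind – that probability is reached when a learner’s ability exceeds a question’s difficulty by ln 2, roughly 0.69 logits:

```python
import math

# Illustrative sketch only, not SNSA's actual scaling. In a Rasch-type
# model, P(correct) = 1 / (1 + exp(-(ability - difficulty))).
# "Twice as likely as not" means odds of 2:1, i.e. P = 2/3, which holds
# when ability exceeds item difficulty by ln(2) ≈ 0.69 logits.

def p_correct(ability: float, difficulty: float) -> float:
    """Probability of a correct response under a Rasch-type model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

margin = math.log(2)         # ability minus difficulty giving odds of 2:1
p = p_correct(margin, 0.0)   # p = 2/3, i.e. twice as likely as not
```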
The terms high, medium and low have a specific and different meaning for each of the assessments, according to subject area and year group. Accordingly, the dot on an individual’s report, locating the learner’s capacity, shows what kinds of skills, knowledge and understanding he or she demonstrated in the particular assessment, rather than any fixed judgement about the learner’s aptitude.
It is important to note that, because each of the capacity regions for 2017 to 2018 is specific to a P1, P4, P7 or S3 assessment, regions are not comparable across year groups. Therefore, differences in results across year groups do not reflect growth in capacity. For the 2018 to 2019 academic year, the newly established bands corresponding to the SNSA long scale will allow comparisons across year groups in terms of proportions of learners with capacity at each band.
In reporting for the 2017 to 2018 academic year, a large proportion of children and young people showed capacity in the region labelled high. In subsequent school years, results will be described with reference to a series of bands along the SNSA long scale for each subject – six overlapping bands per year group. During the course of the 2018 to 2019 school year, the 2017 to 2018 results will be transposed onto the long scale and made available to schools and local authorities.
The terms high, medium and low, in relation to learner capacity, are used with the meaning described here throughout this report.
The results on the assessment of an individual, a class or a school are intended as one piece of evidence – a fair and objective piece of evidence – in an evaluation of learners’ capacities. The holistic result on the assessment is intended to be used by teachers to corroborate or, sometimes, to raise questions about, other reference points in their overall assessment of a learner’s capacity.
Figure 1 shows an example extract from an individual report (for a fictitious learner) for the Primary 7 Reading assessment for 2017 to 2018. To the left is a scale labelled high, medium and low, accompanied by the region description text, referred to above, for each of the broad regions. The easiest content is summarised in the paragraph at the bottom, and the most difficult summarised in the paragraph at the top.
Figure 1: Example page from an Individual Report
1.3.2 Reporting on question difficulty
Just as each learner’s overall capacity was expressed as high, medium or low, each question in the assessment was also categorised as being of high, medium or low difficulty. A question categorised as low difficulty was one that learners of this age and stage were generally likely to answer correctly. A question categorised as medium difficulty was one that a smaller proportion of learners answered correctly, and a question categorised as high difficulty was one that relatively few learners answered correctly. The ratings of question difficulty appeared next to a brief description of each question on reports to schools, to assist teachers diagnostically in interpreting the challenge of the questions presented to the individual child or young person.
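A categorisation of this kind can be illustrated as a simple banding of questions by the proportion of learners answering them correctly. The cut points below are hypothetical, chosen only to show the shape of the rule, and are not SNSA’s actual values.

```python
# Illustrative sketch only: banding questions as low, medium or high
# difficulty from the proportion of learners answering correctly.
# The 0.7 and 0.4 cut points are hypothetical, not SNSA's actual values.

def difficulty_band(proportion_correct: float) -> str:
    """Band a question by the share of learners who answered it correctly."""
    if proportion_correct >= 0.7:
        return "low"     # most learners answer correctly
    if proportion_correct >= 0.4:
        return "medium"  # a smaller proportion answer correctly
    return "high"        # relatively few answer correctly
```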
More detail about the content of the assessments is provided in the subject area sections of this report.
1.4 The first year of SNSA
The 2017 to 2018 academic year was the first year of implementation for SNSA, and 579,879 assessments were completed across Scotland over the course of the year. This number is equivalent to about 95% of the possible maximum number of assessments available for children and young people in P1, P4, P7 and S3.
It should be noted that two circumstances which applied to the implementation of SNSA in 2017 to 2018 have shaped the scope of this report. Firstly, while a primary aim of the programme is to provide information on progress in learning, it is not yet possible to report on progress, since progress entails a comparison over time. This year’s outcomes will serve as a baseline for judging progress, from 2017 to 2018 onwards, at school and larger group levels, and will eventually provide teachers with information on the progress of individuals as they move from Primary 1 through the years of schooling.
Secondly, the metric on which SNSA was reported in the 2017 to 2018 academic year was not standardised for the Scottish population. A broad categorisation of attainment as high, medium or low was used in this first year, based on data from international contexts. A more refined scale for each subject area has now been developed, drawing on the results from two representative samples of Scottish children and young people presenting for SNSA, assessed in the first and second halves of the 2017 to 2018 academic year, as well as data from an equating study in the year groups P2, P3, P5, P6, S1 and S2 – the year groups in between those that form part of SNSA (P1, P4, P7 and S3). These new ‘SNSA long scales’ are being used for reporting from August 2018 onwards.
The Scottish Government’s policy and practice of continuous improvement applies not only to educational attainment but also to the SNSA programme itself. Enhancements to content, reporting, the system delivering the assessments and the professional training courses accompanying the assessments were introduced during the 2017 to 2018 academic year, and further improvements will be implemented over the coming years.