Literature Review on the Impact of Digital Technology on Learning and Teaching

This literature review was commissioned by the Scottish Government to explore how the use of digital technology for learning and teaching can support teachers, parents, children and young people in improving outcomes and achieving our ambitions for education in Scotland.


Method

A research protocol was developed, setting out inclusion criteria for a literature search, a search strategy and search terms. This can be found in Annex 1.

Results of the literature search

The initial searches identified over 600 items, along with over 350 additional items from the Scottish Government Library Service's lists and website searches. After a review of abstracts to determine relevance, the list was reduced to 217 items for detailed review. Selection took account of subject matter and of whether a study offered empirical evidence measuring the outputs and outcomes of digital technologies in learning and teaching; it did not take account of the research methods used. Table 1 below shows the number of items included in the detailed review, by the broad thematic areas of the study.

Table 1 Profile of studies selected for detailed review.

Thematic Area | No. of Studies
Raising attainment | 100
Reducing inequalities between children | 48
Improving transitions into employment | 15
Improving the efficiency of the education system | 45
Enhancing parental engagement | 9
Total | 217

Even with additional searches to identify more studies on 'improving transitions' and 'enhancing parental engagement', the imbalance between thematic areas could not be reduced. Of the 217 studies selected for detailed review, just over 60 provide evidence of relevance to this report. A bibliography of these items can be found in Annex 3.

To guide the analysis and assessment of the material, the study took a structured approach to reviewing the quality of evidence in the literature. An assessment framework (in the form of a logic model) was developed to help identify the key evidence of the relationships between digital learning and the outcomes being measured, and of what works in achieving the outputs and outcomes sought from the literature review.

Table 2 below sets out the approach to assessing the quality of empirical evidence found in the literature. Assessment is based on:

  • Experience of evaluating policy actions in education and training. Randomised control trials and empirical studies establishing an appropriate comparative situation where a policy measure has not been implemented are more likely to provide robust assessments of the relationship between any policy measure and the outcomes measured than a small set of qualitative interviews with the delivery agents;
  • The criteria used in the What Works evidence reviews[3] and the Maryland Scientific Methods Scale (SMS). These were used to assess the quality of research, and to give more weight to some research than to others. Because relatively little of the literature about digital technologies in learning and teaching would score at the higher levels of the SMS, a less stringent approach is necessary: consistent results from qualitative and small-scale mixed-method studies in different contexts can provide evidence of relationships when better quality research is unavailable.

Table 2 Strength of evidence demonstrating a causal effect

Type of study | Strength of evidence
Studies which have drawn conclusions from meta-reviews of robust evaluations | +++++
Evaluation studies with counterfactual quantitative evidence of a significant effect | ++++
Studies which measure change before and after the policy action, controlling for other factors, and which have large samples for robust statistical analysis | +++
Research studies which are based on sufficiently in-depth case studies and a sample of qualitative interviews to allow robust qualitative assessments | ++
Small scale studies dependent on qualitative data which has not been collected systematically or on a sufficiently large scale | +

The scale in Table 2 is used in this review to suggest that higher level studies (rated ++++ or +++++) provide conclusive evidence, middle level studies (+++) provide indicative evidence, and lower level studies (+ or ++) provide promising evidence. Account also needs to be taken of the contexts, volumes and scales of studies in reaching these conclusions.

Annex 2 sets out the assessment framework devised for this literature review. In the form of a logic model for digital learning and teaching measures, this helps to:

  • Clarify the evidence of relationships between the learning and teaching activities and the expected outputs, outcomes and impacts;
  • Show the linkages we might expect to find evidence of, between the digital learning and teaching activities and the outputs, outcomes and impacts for different beneficiaries (learners, parents, teachers, and the school);
  • Separate the outcomes which might be considered to be immediate, medium term and longer term.

It is anticipated that digital learning and teaching activities can be more closely related to the immediate and medium term outcomes than the longer term outcomes. Longer term outcomes would be expected to be achieved through a variety of measures, which could include digital learning and teaching activities.

The quality of the literature

The research literature is extensive, even after excluding material about older technologies which are no longer relevant, studies of learners over the age of 18, and largely descriptive (i.e. not empirical or analytical) studies. Much of the research, however, focuses on relatively small scale applications of digital technology. Many of these studies use qualitative data from teachers and learners to describe short term outcomes, neither testing changes in knowledge or skills over time nor comparing subjects to similar groups of learners who have not used digital applications. However, there are studies - particularly on the use of digital learning to increase attainment in specific subject areas - which measure the knowledge and skills acquired and compare learners who have used digital applications with learners who have not.

While a few studies have identified statistical relationships between ICT usage and longer term outcomes - such as attainment in examinations and tests in secondary education - there are no longitudinal studies which show relationships between digital learning and the longer term outcomes set out in Annex 2.

There are a few meta-reviews and meta-analyses of the literature that have examined the conclusions reached by a large body of similar research on digital learning and teaching in specific contexts. There are also some reviews of similar studies/technologies that have examined the methods used to discern the strength of evidence they collectively provide. These together can provide stronger evidence than individual studies of both the outcomes achieved and of how far digital learning and teaching make a difference.

The substantial published meta-analyses identified for this review are summarised in Table 3 below, with further details in Annex 4.

Table 3: Summary of Meta-Analysis Literature Reviews Included in the Review

Authors | Date | Title | Scale covered
Li, Q. and Ma, X. | 2010 | A Meta-Analysis of the Effects of Computer Technology on School Students' Mathematics Learning | 46 primary studies covering 37,000 learners
Higgins, S., Xiao, Z., and Katsipataki, M. | 2012 | The Impact of Digital Technology on Learning: A Summary for the Education Endowment Foundation | 48 studies synthesizing primary research
Liao, Y-k C., Chang, H-w., and Chen, Y-w. | 2008 | Effects of Computer Application on Elementary School Students' Achievement: A Meta-Analysis of Students in Taiwan | 48 studies covering over 5,000 learners
Archer, K., Savage, R., et al. | 2014 | Examining the effectiveness of technology use in classrooms: A tertiary meta-analysis | 38 primary studies
Cheung, A., and Slavin, R. | 2012 | Effects of Educational Technology Applications on Reading Outcomes for Struggling Readers: A Best Evidence Synthesis | 20 studies covering 7,000 learners

Meta-analyses use a standard measure - effect size - to compare studies with different sample sizes and different measurements of change [4]. In educational experiments, effect sizes greater than around 0.4 are generally considered educationally significant and, if achieved over a sustained period, should influence learners' longer term attainment (such as grades achieved in examinations).
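To illustrate what this measure represents: the most common effect size statistic in such meta-analyses, Cohen's d, is the difference between group means divided by a pooled standard deviation. The LaTeX sketch below sets out this standard formulation with hypothetical figures; it is not the specific calculation used by any of the studies in Table 3.

    % Cohen's d: the standardised mean difference between an intervention
    % group (e.g. learners using digital applications) and a control group.
    % The figures in the worked example are hypothetical, chosen only to
    % illustrate the 0.4 threshold described above.
    \[
      d = \frac{\bar{x}_{\mathrm{intervention}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
      \qquad
      s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
    \]
    % Worked example: a mean test score of 68 in the intervention group and
    % 62 in the control group, with a pooled standard deviation of 15, gives
    % d = (68 - 62) / 15 = 0.4, i.e. exactly at the threshold described above.

Expressing results on this common scale is what allows a meta-analysis to combine studies that used quite different tests and sample sizes.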

In using these studies, it is important to be mindful that:

  • The outcomes they measure can arise from factors other than the use of digital technology. This can be controlled for where the outcomes can be compared to learners who have not used digital learning;
  • The scale of outcome can be influenced by the quality and effectiveness of the implementation of the digital learning by the teacher, and the quality of the teaching;
  • The scale of difference can increase with the length of time the digital learning has been used;
  • The studies do not focus on how the outcomes are achieved. This is generally more apparent from examining a wide range of smaller scale evaluative studies;
  • The studies most commonly cover learners in the US and East Asia: economies with similar ambitions for learners and their progression to higher education and employment, as well as similar curricula;
  • In addition, while many studies clearly focus on specific learners in terms of age, setting (primary, secondary, special education) and domestic circumstances, none compare the impact of digital technologies on educational priorities across different age groups. As a consequence, it has not been possible to identify differences in the use and impact of digital technology between primary and secondary school settings; only that the use of digital technologies was beneficial to learners in primary and/or secondary settings.

Contact

Email: Catriona Rooke
