Scottish Household Survey: response rates, reissuing and survey quality

This paper assesses the impact of reissuing on survey estimates using data from the Scottish Household Survey, 2014 and 2016.



2 Summary of previous literature on non-response bias

2.1 Traditionally, response rates have been used as a key proxy measure of survey quality, with a high response rate indicating good quality. However, empirical studies suggest that response rates are not a good measure of survey error or bias, and their use as such (although widespread) is problematic (Biemer et al, 2017).

2.2 Overall, research concerning non-response bias generally agrees on the demographics of those who respond less frequently to surveys. They tend to be young, single, and in employment (Luiten, 2013; Foster, 1998; Lynn and Clark, 2002; Hall et al, 2011). This is mainly because these groups are harder to contact.

2.3 However, much of the literature finds a very weak link between response rates and non-response bias (Sturgis et al, 2016; Teitler, Reichman and Sprachman, 2003; Keeter, Miller, Groves and Presser, 2000; Merkle and Edelman, 2002; Curtin, Presser and Singer, 2000; Groves, 2006; Lynn papers as cited in D'Souza et al 2016). This is partly because good weighting strategies help to correct for patterns of differential response.

2.4 Empirical studies of non-response fall into two types: absolute non-response studies and relative non-response studies. Absolute non-response studies compare survey estimates with good estimates of a "true" value of a variable, normally from the Census, to assess total non-response bias. Relative non-response studies assess how survey estimates change with increasing fieldwork effort (e.g. number of contact attempts, extent of reissuing), and therefore with changes in target response rates. There are two key academic meta-analysis studies:

  • Groves and Peytcheva (2008) conducted a meta-analysis of absolute non-response in 59 studies (covering 959 estimates). While they found examples of large non-response bias existing, they also found that there was a very low correlation between non-response bias and response rates, and greater variation within studies than between them. They argue for the importance of finding theories that link unit non-response to non-response bias and make a distinction between missing respondents that don't introduce bias and those that do.
  • Sturgis et al (2016) examined relative non-response bias and fieldwork effort in 541 non-demographic variables in six surveys. They conclude that "response rate appears to have only a weak association with non-response bias".
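The two study designs described in paragraph 2.4 can be sketched in a few lines of code. All figures below are invented for illustration only; neither the variable (owner-occupation) nor the values come from the SHS or any cited study.

```python
# Absolute non-response bias: compare a survey estimate with a "true"
# benchmark value for the same variable (e.g. from the Census).
survey_estimate = 62.0   # hypothetical % estimate from a survey sample
census_value = 64.5      # hypothetical benchmark "true" value
absolute_bias = survey_estimate - census_value
print(f"Absolute non-response bias: {absolute_bias:+.1f} percentage points")

# Relative non-response bias: track how the estimate moves as fieldwork
# effort (and hence the achieved response rate) increases. Here the key
# is the maximum number of contact attempts made (hypothetical figures).
estimates_by_contact_attempts = {1: 60.8, 3: 61.7, 6: 62.0}
relative_shift = (estimates_by_contact_attempts[6]
                  - estimates_by_contact_attempts[1])
print(f"Shift from extra fieldwork effort: {relative_shift:+.1f} percentage points")
```

The contrast the sketch makes is the one drawn in the literature: absolute studies need an external benchmark, while relative studies only need the survey's own data split by level of fieldwork effort.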

2.5 As well as these major meta-analysis studies, there are a number of individual studies that provide useful contextual information:

  • In 2015, ONS undertook analysis of the impact of a lower response rate on the Crime Survey for England and Wales. They concluded "This analysis suggests that the impact of a lower response rate on the key CSEW estimates will be tiny and may be zero for some sub-groups. If the response rate is lowered by eight percentage points […] the largest impact on any point estimate would be expected to be approximately 0.3 percentage points. Some sub-group impacts might be larger than this but that would be due to the larger level of random sampling error that affects these estimates rather than any additional systematic impact."
  • The technical reports for SCJS 2014/15 and 2016/17 included analyses to consider the impact of a significant drop in response rate on key survey estimates. The analysis considered the average absolute difference (AAD) in survey estimates for selected variables (including the prevalence of being a victim of vandalism, assault and personal crime) between the overall final sample and the first-issue sample. The 16/17 report concluded that a lower response rate "has a relatively marginal impact on key survey estimates".
  • Two unpublished studies examining relative non-response in the SHS have been undertaken as Q-step summer placement projects, with input from both Ipsos MORI and the Scottish Government. These studies have informed the analysis of the 2014 and 2016 waves of the SHS presented in this paper.
  • A similar study examining the impact of reissuing on estimates in the Scottish Crime and Justice Survey (SCJS) has been undertaken.
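The average absolute difference (AAD) measure used in the SCJS technical reports can be sketched as follows. This is a minimal illustration assuming AAD is the mean of the absolute differences between the two samples' estimates across the selected variables; the prevalence figures are invented and do not come from the SCJS.

```python
# Hypothetical weighted prevalence estimates (%) for selected variables,
# from the first-issue sample and the final (post-reissue) sample.
first_issue = {"vandalism": 4.1, "assault": 2.6, "personal_crime": 5.9}
final_sample = {"vandalism": 4.3, "assault": 2.5, "personal_crime": 6.0}

# AAD: mean absolute difference between the two sets of estimates.
aad = sum(abs(final_sample[k] - first_issue[k])
          for k in first_issue) / len(first_issue)
print(f"AAD = {aad:.2f} percentage points")
```

A small AAD indicates that reissuing (and the higher response rate it delivers) moved the selected estimates very little, which is the basis for the SCJS conclusion quoted above.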

2.6 Relative non-response bias studies have suggested that, while the impact is relatively small on average, some types of variable appear more susceptible to bias than others, such as attitudes and behaviours linked to civic engagement. D'Souza et al (2017) found that reissuing unproductive cases did reduce non-response bias in estimates of rates of volunteering and community-oriented activities, although they questioned how far reissuing was a cost-effective way of reducing non-response bias. However, it should be clearly emphasised that bias occurs at the level of individual estimates rather than at the survey level.

Contact

Email: shs@gov.scot
