Scottish Household Survey: response rates, reissuing and survey quality

This paper assesses the impact of reissuing on survey estimates using data from the Scottish Household Survey, 2014 and 2016.



3 Approach to analysis

3.1 This is a relative non-response bias study, estimating the impact of a change in the response rate rather than assessing the overall level of non-response bias.[2] At the core of the analysis is the question, 'What impact does reissuing have on the survey estimates?'

3.2 The analysis compares estimates from the weighted full survey sample with estimates from first issue interviews only. It is important to note that the first issue estimates were weighted as if the first issue interviews were the final achieved sample.[3] In effect, this analysis shows how the published results of the survey would have differed if reissuing had not been used and fieldwork had been completed with lower response rates.

Figure 3.1: Overview of the two types of estimate and how they correspond to the reissuing strategies and response rates

Data estimate based on | Reissuing strategy | Response rate 2014 | Response rate 2016
Fully achieved sample (same as current published estimates) | Reissue almost all of what can be (current approach) | 67.0% | 64.2%
First issue respondents only (Issue 1 estimates) | No reissues | 56.1% | 53.8%

3.3 Overall, twelve key survey measures were selected for analysis at the national level for each survey year (2014 and 2016). These are detailed in Table 3.1, along with their sample sizes. They include some of the survey's headline measures as well as measures asked of only a subset of the sample.

3.4 Additionally, estimates for the eleven random adult measures were analysed by key sub-groups: gender, age, rurality, deprivation, tenure, area and household type.

Table 3.1: Key survey measures included in the analysis

Measure | 2014 base[4] | 2016 base | Notes
National level estimates
% satisfied (very or fairly) with local public services | 9,746 | 9,594 | Adults
% agree (strongly or slightly) they 'can influence decisions affecting my local area' | 9,798 | 9,642 | Adults
% using the internet for personal use | 4,787 | 4,707 | Adults (asked of a sub-set)
% rate neighbourhood as a very good place to live | 9,798 | 9,642 | Adults
% participated in a cultural activity or attended a cultural place or event in the last 12 months | 5,140 | 5,008 | Adults (asked of a sub-set)
% that make one or more visits to the outdoors per week | 9,798 | 9,642 | Adults
% live within 5 min walk of greenspace | 9,798 | 9,642 | Adults
% provided unpaid help to organisations or groups within last 12 months | 9,798 | 9,642 | Adults
% participation in physical activity or sport in last four weeks | 9,798 | 9,642 | Adults
% rate general health as bad or very bad | 9,798 | 9,642 | Adults
% experienced either discrimination or harassment | 9,798 | 9,642 | Adults
% households not managing well financially | 10,632 | 10,470 | Households

3.5 Impact was measured in two ways. Firstly, through the absolute percentage point difference between the final sample estimate and the first issue only estimate. The absolute difference gives a good indication of the overall impact on each estimate.
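In symbols, writing \(\hat{p}_{\text{full}}\) for the weighted full-sample estimate and \(\hat{p}_{1}\) for the first issue only estimate (notation ours, not the paper's):

$$\Delta = \left|\,\hat{p}_{\text{full}} - \hat{p}_{1}\,\right| \quad \text{(in percentage points)}$$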

3.6 However, using the absolute difference alone does not give a fair test of the impact of reissuing: everything else being equal, we would expect the size of the difference to be largest for estimates around 50% and to decrease as the estimate moves away from 50%. The absolute difference also takes no account of the sample size. Additionally, traditional significance tests, such as a chi-squared test or formal hypothesis testing, were not appropriate, since the samples are not independent: subsamples of the full sample are being compared with the full sample. Alternative tests could be used, but the impact of reissuing would have to be extreme for a difference to be significant, so they are not very discriminating.
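The expectation that differences peak around 50% follows from the binomial form of the sampling error for a proportion; as a reminder (a standard result, not a formula from this paper):

$$\operatorname{SE}(\hat{p}) = \sqrt{\frac{p(1-p)}{n}}$$

The numerator \(p(1-p)\) is maximised at \(p = 0.5\) and falls towards zero as \(p\) approaches 0% or 100%, so, other things being equal, both sampling variability and observed differences are expected to be largest for estimates near 50%.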

3.7 In order to compare the magnitude of differences across estimates, it was necessary to standardise them in some way. This has been done in different ways in the past; for example, for their assessment of the impact of a lower response rate on the Crime Survey for England and Wales, Williams and Hocekova (2015) converted 'effect sizes' into t-scores.

3.8 Secondly, impacts were standardised by calculating the ratio of the absolute difference between the two estimates to the standard error of the main estimate. This method of standardising is equivalent to the Bias Ratio method described in Särndal et al (1993).

3.9 We favoured standardising impacts in this way because the result can be compared intuitively with sampling error. A value of one for this measure means that the difference between the estimates is equal to one standard error of the main estimate.
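A minimal sketch of this calculation in Python; the function names and illustrative figures are our own assumptions rather than the paper's actual code, and the binomial standard error is inflated by the 1.2 design factor stated in paragraph 3.10:

```python
import math

def standard_error(p: float, n: float, deft: float = 1.2) -> float:
    """Binomial standard error of a proportion p (0-1 scale), inflated by
    the assumed survey design factor deft (1.2 here, per paragraph 3.10)."""
    return deft * math.sqrt(p * (1 - p) / n)

def bias_ratio(p_full: float, p_first_issue: float, n_full: float) -> float:
    """Absolute difference between the full-sample and first-issue-only
    estimates, expressed in standard errors of the full-sample estimate."""
    return abs(p_full - p_first_issue) / standard_error(p_full, n_full)

# Hypothetical illustration (not figures from the report): a 55.0%
# full-sample estimate vs a 53.8% first-issue estimate, base 9,642 adults.
print(round(bias_ratio(0.550, 0.538, 9_642), 2))  # ~1.97 standard errors
```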

3.10 Standard errors and confidence intervals were adjusted to take account of published guidance on the expected survey design factors in the SHS and SCJS; the analysis used a design factor assumption of 1.2. The standard errors given throughout this report are after adjustment for the design factor, and are therefore based on the net effective sample sizes of the estimates[5]; they do not need further adjustment to calculate confidence intervals.
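Concretely, with design factor \(\text{deft} = 1.2\), the adjustment takes the standard form (our rendering of the usual relationship, not a formula quoted from the paper):

$$\operatorname{SE}_{\text{adj}}(\hat{p}) = \text{deft} \times \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}, \qquad n_{\text{eff}} = \frac{n}{\text{deft}^{2}}$$

so, for example, a nominal base of 9,642 adults corresponds to a net effective sample size of roughly 9,642 / 1.44, or about 6,700.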

Contact

Email: shs@gov.scot
