9 Bias and Data Quality
- Bias arises in every sample survey.
- Some bias comes from survey design (such as the sampling frame or who is deemed eligible for interview). Bias also reflects aspects of fieldwork outcomes, e.g. whether potential respondents can be found at home at times when interviewers call, whether they are able to participate (i.e. not restricted by ill health, disability or communication barriers), and the willingness of members of the public to take part in the survey.
- Traditionally, response rates have been used as a proxy measure of survey quality, on the assumption that a high response rate and a large sample ensure accurate estimates.
- However, the response rate is not a measure of survey error or bias, and its use as such (although widespread) is inappropriate. If non-response to the survey is not spread evenly, either geographically or between sub-groups of the population, the resulting bias will limit the accuracy of the survey's estimates.
- The weighting strategy employed by the survey is intended to minimise the extent of bias.
- Comparisons with Scotland's Census 2011 and the mid-year population estimates show that the weighted SHS sample appears to be generally robust.
- The survey weighting reduces the difference between the unweighted SHS survey results and the Census 2011 estimates, though differences do still remain.
The issue of bias arises in every sample survey. There are a number of sources of bias, some of which reflect aspects of the survey design (such as the sampling frame or who is deemed eligible for interview). However, bias is also a reflection of the following aspects of fieldwork outcomes:
- the quality of survey administration procedures;
- whether potential respondents can be found at home at times when interviewers call;
- whether they are able to participate i.e. not restricted by ill health, disability or communication barriers; and
- the willingness of members of the public to participate in the survey.
A high response rate is generally viewed as one of the key measures of data quality and, all other things being equal, a high response rate and a large sample should ensure accurate estimates. However, if non-response to the survey is not spread evenly, either geographically or between sub-groups of the population, the resulting bias will limit the accuracy of the survey's estimates.
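The effect of uneven non-response can be sketched with a small worked example. The figures and group names below are invented for illustration and are not SHS or Census data; the sketch simply shows how an unweighted estimate drifts away from the true population value when one sub-group responds at a lower rate, and how weighting each group back to its known population share removes that drift.

```python
# Illustrative sketch (invented figures, not SHS data): uneven non-response
# biases an unweighted estimate; weighting to known population shares fixes it.

# Assumed population composition: 40% social renters, 60% owner-occupiers.
pop_share = {"social_rented": 0.40, "owner": 0.60}
# Assumed true rate of some characteristic in each group.
true_rate = {"social_rented": 0.30, "owner": 0.10}
# Assumed response rates: social renters respond less often than owners.
response_rate = {"social_rented": 0.40, "owner": 0.70}

# The responding sample over-represents owners.
resp_share = {g: pop_share[g] * response_rate[g] for g in pop_share}
total = sum(resp_share.values())
unweighted = sum(resp_share[g] / total * true_rate[g] for g in pop_share)

# Weighting each group back to its population share recovers the true rate.
weighted = sum(pop_share[g] * true_rate[g] for g in pop_share)

print(f"true population rate: {weighted:.3f}")    # 0.180
print(f"unweighted estimate:  {unweighted:.3f}")  # biased low, 0.155
```

Note that the bias depends on the *pattern* of non-response, not its overall level: if both groups responded at the same (even low) rate, the unweighted estimate would be unbiased.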
The weighting strategy employed by the survey (described in section 7) is intended to minimise the extent of bias. The issue of residual bias is considered by comparing key results from the SHS with comparator data. The 2011 Census is the most accurate source of population data, and it is used by National Records of Scotland (together with other sources of data on migration) to produce mid-year population estimates. While the 2011 Census figures are six years behind the 2017 SHS data, they ought to be comparable, as changes in the distribution of age and household types are relatively small from year to year.
9.2 Comparisons with Scotland's Census 2011 and mid-2016 Population Estimates
Comparisons with Scotland's Census 2011 and the mid-year population estimates show that the weighted SHS sample appears to be generally robust in terms of variables associated with accommodation/property characteristics. Table 9.1 shows that outright ownership appears to be over-represented whilst social rented accommodation is under-represented. The survey weighting reduces the difference between the unweighted SHS survey results and the Census 2011 estimates, though differences do still remain.
This may reflect the difficulties in obtaining interviews with particular sub-groups of the population.
Table 9.1: Comparison of tenure of household between Census 2011 and SHS 2017 estimates

| Households | Census 2011 | SHS 2017 unweighted | SHS 2017 weighted |
|---|---|---|---|
| Buying with help of loan/mortgage¹ | 34.2 | 27.2 | 29.2 |
| Council (Local Authority) | 13.2 | 13.0 | 13.9 |
| Other social rented | 11.1 | 9.0 | 9.0 |

¹ includes shared ownership (part owned and part rented); ² includes living rent free
When a single adult is randomly selected within households, the unweighted sample of adults always under-represents those living in multi-adult households, since each has a smaller chance of selection for interview. Table 9.2 shows the differences in the unweighted sample and how the weighting has reduced the differences from other estimates. For instance, the unweighted SHS sample contained only 7.5 per cent of adults aged 16 to 24, and the weighting increases this proportion to 13.5 per cent - much closer to both the 2011 Census and the mid-2016 population estimates. The result is that the age/sex distribution of the weighted sample is much more closely aligned with the 2011 Census and the mid-2016 population estimates.
Table 9.2: Comparison of age of adults between Census 2011, mid-2016 population estimates and SHS 2017 estimates

| Adults | Census 2011 | Mid-2016 population estimates | SHS 2017 unweighted | SHS 2017 weighted |
|---|---|---|---|---|
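The under-representation of adults in multi-adult households described above, and its correction, can be sketched as follows. The example uses a toy set of households (not SHS data): an adult in a household with n adults has a 1/n chance of selection, so giving each selected adult a design weight proportional to n restores the population balance.

```python
# Sketch of inverse-selection-probability weighting for the "one random
# adult per household" design. Toy data, not the SHS weighting code.

households = [
    {"adults": 1}, {"adults": 1}, {"adults": 2}, {"adults": 2}, {"adults": 3},
]

# Population share of adults living in multi-adult households.
adults_in_pop = sum(h["adults"] for h in households)                      # 9
share_multi_pop = sum(h["adults"] for h in households
                      if h["adults"] > 1) / adults_in_pop                 # 7/9

# Unweighted: one selected adult per household, so multi-adult households
# contribute proportionally fewer adults than they hold.
share_multi_unweighted = sum(1 for h in households
                             if h["adults"] > 1) / len(households)        # 3/5

# Weighted: each selected adult gets weight n (inverse of probability 1/n).
weights = [h["adults"] for h in households]
share_multi_weighted = sum(w for h, w in zip(households, weights)
                           if h["adults"] > 1) / sum(weights)             # 7/9

print(f"{share_multi_pop:.3f}")        # 0.778 (population)
print(f"{share_multi_unweighted:.3f}") # 0.600 (under-represented)
print(f"{share_multi_weighted:.3f}")   # 0.778 (matches the population)
```

This is the design-weight component only; the SHS weighting also adjusts for non-response and calibrates to population totals, as described in section 7.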
9.3 Total survey error
Traditionally, response rates have been used as a proxy measure of survey quality - with a high response rate indicating good quality. However, the response rate is not a measure of survey error or bias and its use as such (although widespread) is inappropriate. The response rate in a survey indicates the risk of non-response bias in one or more of the variables measured, not that it is actually present in any of them.
Contrary to previous belief, a high response rate does not necessarily produce a high-quality, unbiased survey sample. Indeed, there is growing evidence that the use of reissues to maintain response rates has only a marginal impact on data quality. This has been seen in several academic and industry studies over the past decade across a range of surveys. For example, D'Souza et al (2017) state that much of the literature finds a weak link between response rate and non-response bias.
Analysis conducted in partnership with Ipsos MORI to assess the impact of reissues on SHS data showed that, after weighting, only a very small number of measures were changed by reissuing, and that the scale of the change was small. However, a first-issue response rate would have more of an impact on SHCS estimates, with some sub-group differences greater than two percentage points. The analysis concluded that reissuing has a minimal impact on reducing non-response bias in the Scottish Household Survey: while there are differences between households and people who respond at the first issue and those who do not, these differences do not make a substantial impact on the results of the survey. Sub-national analysis was not considered in this paper; lower response rates would be expected to have a greater impact at Local Authority and other sub-national geographies.
It should be noted, however, that the SHS covers a wide range of issues and patterns of non-response bias are likely to differ across different measures.
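The point that the response rate is not itself a measure of bias follows from the standard deterministic decomposition of non-response bias, sketched below with invented figures: the bias in a respondent mean is the product of the non-response rate and the respondent/non-respondent difference, so a survey with a lower response rate can be less biased than one with a higher rate.

```python
# Standard deterministic non-response bias decomposition (illustrative
# numbers, not SHS results):
#   bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents)
# The response rate only scales the bias; the respondent/non-respondent
# difference on each variable determines whether any bias exists at all.

def nonresponse_bias(response_rate, mean_resp, mean_nonresp):
    return (1 - response_rate) * (mean_resp - mean_nonresp)

# Same 60% response rate, two different variables:
print(f"{nonresponse_bias(0.60, 0.50, 0.50):.2f}")  # 0.00: groups agree, no bias
print(f"{nonresponse_bias(0.60, 0.50, 0.30):.2f}")  # 0.08: groups differ, bias

# A higher response rate with very different non-respondents can be just as
# biased as a lower one with moderately different non-respondents:
print(f"{nonresponse_bias(0.80, 0.50, 0.10):.2f}")  # 0.08
```

This also illustrates why patterns of non-response bias differ across measures within the same survey: the same response rate applies to every variable, but the respondent/non-respondent difference does not.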
When assessing survey quality, the Total Survey Error (TSE) framework should not be overlooked. The TSE approach methodically identifies all the possible errors that can arise at each stage of the survey process, dividing that process into two main strands: a representation strand and a measurement strand. The relationship between survey process and error type is shown in Figure 9.1.
Figure 9.1: The lifecycle of a survey from a quality perspective
The less quantifiable a type of error is, the more likely it is to be overlooked. Groves, one of the authors of the Total Survey Error framework, described this as "the tyranny of the easily measurable". This has tended to mean an over-emphasis on the representation errors in Figure 9.1 at the expense of consideration of validity and measurement error.