Health and Care Experience Survey 2019/2020: technical report

Report on the technical aspects of the survey, including development, implementation, analysis and reporting.

8. Analysis and Reporting

The survey data collected and coded by Quality Health Ltd were securely transferred to Public Health Scotland, where the information was analysed using the statistical software package SPSS version 24.0.

Reporting the Sex and Gender of Respondents

Analysis of survey response rates by sex was undertaken using the sex of people in the sample according to their CHI record at the time of data extraction (20 August 2019). This source was also used in the calculation of the survey weights (more information about this is provided later in this section).

For all other analyses by gender, the respondents’ answer to question 37 “What best describes your gender?” has been used. In total, 158,505 respondents (99 per cent) provided a valid response to question 37.

Reporting the Age of Respondents

Respondent date of birth was taken from their CHI record at the time of data extraction (20 August 2019). This source was used for all stages of the analysis. The age of respondents used for reporting purposes was calculated as at 5 September 2019, the date when the sampling procedure commenced.
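The report does not set out the exact calculation used, but age at a fixed reference date is conventionally taken as whole years completed by that date. A minimal sketch of that standard calculation, using the 5 September 2019 reference date and illustrative dates of birth:

```python
from datetime import date

def age_at(dob: date, ref: date) -> int:
    """Whole years completed between date of birth and the reference date."""
    # Subtract one year if the birthday has not yet occurred by the reference date.
    return ref.year - dob.year - ((ref.month, ref.day) < (dob.month, dob.day))

reference = date(2019, 9, 5)  # date the sampling procedure commenced

# A person born 6 September 1979 had not yet had their 2019 birthday
# on the reference date, so they count as 39 rather than 40.
print(age_at(date(1979, 9, 6), reference))  # 39
print(age_at(date(1979, 9, 5), reference))  # 40
```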

Number of Responses Analysed

The number of responses that have been analysed for each question is often lower than the total number of survey responses received. This is because not all of the questionnaires that were returned could be included in the calculation of results for every individual question. In each case this was for one of the following reasons:

  • The specific question did not apply to the respondent and so they did not answer it. For example, respondents who had not used Out of Hours services in the previous 12 months did not answer the questions about their experience of those services.
  • The respondent did not answer the question for another reason (e.g. refused). People were advised that if they did not want to answer a specific question they should leave it blank.
  • The respondent answered that they did not know or could not remember the answer to a particular question.
  • Responses may be removed following validation checks, for example if a respondent selected an invalid combination of responses. Improved validation checks were introduced for this survey to ensure consistency between online and paper responses.
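The report does not describe the specific validation rules applied. As an illustration only, a routing-consistency check of the kind described in the last bullet might set a follow-up answer to missing when it conflicts with a filter question (the field names here are hypothetical):

```python
# Illustrative routing check: a respondent who said they had NOT used
# Out of Hours services should not have answered the follow-up experience
# question. If they did, the follow-up answer is treated as missing and
# excluded from analysis, rather than the whole questionnaire being dropped.
def apply_routing_check(response: dict) -> dict:
    cleaned = dict(response)
    if cleaned.get("used_ooh") == "No" and cleaned.get("ooh_experience") is not None:
        cleaned["ooh_experience"] = None  # invalid combination of responses
    return cleaned

print(apply_routing_check({"used_ooh": "No", "ooh_experience": "Good"}))
# {'used_ooh': 'No', 'ooh_experience': None}
```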

The number of responses that have been analysed nationally for each of the positive / negative questions is shown in Annex B.

Weighting

When conducting a survey, it is important to have a representative sample of the population you are interested in. Applying weighting methods reduces potential bias by making the results more representative of the population.

Survey weights are numbers associated with the responses that specify the influence the various observations should have in the analysis. The final survey weight associated with a particular response can be thought of as a measure of the number of population units represented by that response.
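The survey's actual weighting methodology is set out in the review documentation referenced below. Purely to illustrate the idea that a weight measures the number of population units represented by a response, the following sketch uses made-up strata and counts:

```python
# Illustrative only: weight = population units per response within a stratum.
population = {"stratum_a": 50_000, "stratum_b": 20_000}  # people in each stratum
respondents = {"stratum_a": 1_000, "stratum_b": 800}     # responses received

weights = {s: population[s] / respondents[s] for s in population}
print(weights)  # {'stratum_a': 50.0, 'stratum_b': 25.0}

# Each response in stratum_a "stands for" 50 people, so weighted response
# totals reproduce the population counts.
assert all(weights[s] * respondents[s] == population[s] for s in population)
```

Weighting each stratum's responses back up to its population total in this way is what reduces the bias caused by different response rates across groups.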

A review of the weighting methodology was undertaken in 2017, leading to some changes in the weights applied. Details of the review, the full methodology applied to the results and the impacts of the change are available at

Results at all levels of reporting are weighted.

Backdating of Previous Surveys

A new weighting methodology was introduced in 2018. Figures from previous surveys were backdated in 2018 where appropriate to ensure comparisons over time are available.

Reports specifically relating to previous surveys will not be updated to include the backdated figures.

Percentage Positive and Negative

Per cent or percentage positive is frequently used in reporting results from this survey. This means the percentage of people who answered in a positive way. For example, when people were asked to rate the care provided by their GP practice, answers of “Excellent” or “Good” have been counted as positive. Similarly, those who said their care was “Poor” or “Very poor” have been counted as negative. Annex A details which answers have been classed as positive and negative for each question.
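The classification described above reduces to counting responses in the positive and negative categories and dividing by the total analysed. A minimal sketch, using hypothetical response counts for the GP practice care question:

```python
# Hypothetical counts on the five point scale; the positive/negative
# groupings follow the classification described in the report (Annex A).
counts = {"Excellent": 420, "Good": 380, "Fair": 120, "Poor": 50, "Very poor": 30}

positive = {"Excellent", "Good"}
negative = {"Poor", "Very poor"}

total = sum(counts.values())
pct_positive = 100 * sum(v for k, v in counts.items() if k in positive) / total
pct_negative = 100 * sum(v for k, v in counts.items() if k in negative) / total
print(round(pct_positive), round(pct_negative))  # 80 8
```

Note that in the real analysis each response would contribute its survey weight rather than a count of one; unweighted counts are used here only to keep the sketch simple.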

Percentage positive is mainly used to allow easier comparison than reporting results on the five point scale that people used to answer the questions. Differences between answers on a five point scale may also be subjective: there may be little or no difference between a person who “strongly agrees” and one who “agrees” with a statement, and some people may never strongly agree or strongly disagree with any statement.

As described in Section 4 of this report, these results are based on a sample of patients and are therefore affected by sampling error. The effect of this sampling error is relatively small for the national estimates. However, when comparisons have been made in the analysis of the survey results, the effects of sampling error have been taken into account by the use of confidence intervals and tests for statistical significance. Only differences that are statistically significant are reported as differences within the analysis and all significance testing is carried out at the 5% level.
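The report does not specify which test is used for each comparison; a standard approach for comparing two percentage-positive estimates is a two-sided z-test for a difference in proportions, with significance judged at the 5% level. A self-contained sketch under that assumption, using illustrative sample sizes and proportions:

```python
import math

def two_proportion_z_test(p1: float, n1: int, p2: float, n2: int):
    """Two-sided z-test for a difference between two independent proportions.

    Returns (z, p_value). Illustrative only: the survey's actual testing
    procedure also accounts for the survey weights.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# E.g. 83% positive in one year vs 80% in another, 4,000 responses each.
z, p = two_proportion_z_test(0.83, 4000, 0.80, 4000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```

Under this convention, only differences with p below 0.05 would be reported as real differences; smaller gaps, or the same gap on much smaller samples, would fall within sampling error.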

More information on confidence intervals, significance testing and how they are calculated can be found at:

Quality Assurance of the National Report

A small group of Scottish Government policy leads were sent a draft version of the national report for quality assurance. Feedback included suggestions on ways in which to report data as well as comments about the context for the survey. These were taken into account in finalising the national report. In addition staff at Quality Health Ltd and Public Health Scotland carried out quality checks of figures used in the report.

Revisions to previous publications

A copy of our revisions policy is available at:


