Scottish Inpatient Patient Experience Survey 2012 Volume 2: Technical Report

This report provides technical information on the survey design, sampling, fieldwork and analysis for the Scottish Inpatient Patient Experience Survey 2012.



8 Analysis and Reporting

Introduction

8.1 This section of the report presents detail on the approach to weighting the data, confidence intervals, significance testing and design effects.

8.2 In addition to the quality of the statistics, sources of error and bias are discussed.

Weighting the data

8.3 As the sampling was based on a stratified approach, weighting was applied to ensure that the sample was reflective of the overall number of inpatients who were eligible to take part in the survey.

8.4 Estimates for Scotland and NHS Boards are weighted. Weighted results are calculated by weighting the result for each stratum for each question by the relative number of eligible inpatients. The weight is calculated as the number of eligible inpatients in the stratum (aged 16+ and therefore eligible for the survey) as a proportion of the total number of eligible inpatients (for Scotland or the NHS Board). Weighting the results in this way provides more representative results because the contribution of each hospital to the national or NHS Board average is proportional to the number of eligible patients treated there.
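The weighting described in paragraph 8.4 can be sketched as follows. This is an illustrative Python example using made-up strata figures, not real survey data or the survey's actual SPSS/SAS processing.

```python
# Illustrative sketch of the weighting in paragraph 8.4.
# Each stratum (hospital site / specialty) contributes its percentage
# positive, weighted by its share of eligible inpatients.
# The figures below are hypothetical, NOT real survey numbers.

strata = [
    # (eligible inpatients aged 16+, % positive among respondents)
    (8000, 82.0),
    (1500, 90.0),
    (500, 95.0),
]

total_eligible = sum(n for n, _ in strata)

# Weight for each stratum = its eligible inpatients / total eligible;
# the weighted result is the weight-times-result summed over strata.
weighted_result = sum(n / total_eligible * pct for n, pct in strata)

print(round(weighted_result, 2))
```

Because the largest stratum dominates the weights, the weighted result sits much closer to its 82% than a simple average of the three strata would.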

8.5 There are other ways that the data could be weighted using the differences in the characteristics of those sampled. However based on ethical approvals for the survey, only the contractors had access to the demographic characteristics of the sample for performing the sample checks. This meant that weighting could not take into account differences in the characteristics of those sampled and those who responded.

8.6 The English inpatient survey gives equal weight to all NHS trusts when calculating national results. Equal weighting has not been given to NHS Boards in Scotland: doing so would produce misleading results, as it would give the results for Greater Glasgow and Clyde (which has over 80,000 inpatients annually) the same weight as those for Orkney (which has fewer than 1,000 inpatients annually). Because the smaller boards generally achieve higher positive scores, this type of weighting would inflate the national results.

8.7 Results provided within the Better Together inpatient survey national report use weighted data unless otherwise stated.

Percentage positive and negative

8.8 As described in paragraph 2.5, percentage positive is frequently used in the reporting; it means the percentage of people who answered in a positive way. If people said they strongly agreed or agreed, these answers have been counted as positive. If people said they disagreed or strongly disagreed, these have been counted as negative. Appendix B details which answers have been classed as positive and negative for each question.

8.9 Percentage positive is mainly used to allow easier comparison, rather than reporting results on the five-point scale that patients used to answer the questions. Another reason for doing this is that there may be little or no difference between a person who "strongly agrees" and one who "agrees" with a statement; indeed, some people may never strongly agree or disagree with any statement. Respondents who neither agreed nor disagreed have been classified as neutral.

Significance tests

8.10 As the national inpatient results are based on a survey of a sample of patients and not all hospital inpatients, the results are subject to sampling variability (information on sampling can be found in Chapter 4).

8.11 The survey used a disproportionately stratified (by site or sub-site specialty level) sample design with weights applied to estimate national averages. As described in 8.13, one of the effects of using stratification and weighting is that standard errors for survey estimates are generally higher than the standard errors that would be derived from an unweighted simple random sample of the same size. The calculations of standard error and comments on statistical significance have taken the weighting and stratification into account.

8.12 Comparisons with last year's percentage positive results are discussed on the basis that differences are statistically significant (at the 5% level). The normal approximation to the binomial distribution was used for this. This approach is equivalent to constructing a 95% confidence interval for the difference between the results: if this confidence interval does not contain 0, then the difference is statistically significant at the 5% level.
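The test in paragraph 8.12 can be sketched as a two-sample z-test on the difference between two proportions. This Python sketch uses hypothetical response figures and, for simplicity, omits the adjustment of standard errors for weighting and stratification described in paragraph 8.11, which the survey's actual calculations include.

```python
# Sketch of the significance test in paragraph 8.12: the normal
# approximation for the difference between two percentage-positive
# results. Figures are hypothetical; design-effect adjustments to the
# standard errors (paragraph 8.11) are omitted here.
from math import sqrt

def significant_at_5pct(p1, n1, p2, n2):
    """True if proportions p1 (from n1 responses) and p2 (from n2)
    differ significantly at the 5% level."""
    # Standard error of the difference between independent proportions
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    ci_half_width = 1.96 * se  # half-width of the 95% CI
    # Significant if the 95% CI for the difference excludes 0
    return abs(p1 - p2) > ci_half_width

# e.g. 88% positive from 2,000 respondents this year
# vs 85% positive from 2,100 respondents last year
print(significant_at_5pct(0.88, 2000, 0.85, 2100))
```

With these sample sizes a 3 percentage point change is significant, while a 1 point change would not be.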

Design effects

8.13 One of the effects of using stratification and weighting is that standard errors (measure of sampling error) for survey estimates are generally higher than the standard errors that would be derived from an unweighted simple random sample of the same size.

8.14 The design effect is the ratio between the variance (the average squared deviation of a set of data points from their mean value) of a variable under the sampling method actually used and the variance computed under the assumption of simple random sampling. In short, a design effect of 2 means that the sample provides the same amount of information as a simple random sample of half its size; a design effect of 0.5 implies the reverse. Design effect adjustments are necessary when adjusting standard errors, which are affected by the design of the survey.
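The ratio in paragraph 8.14 can be illustrated with a short Python sketch using made-up variances (these are not estimates from the survey):

```python
# Sketch of the design-effect definition in paragraph 8.14, with
# hypothetical variances: deff = variance under the actual (stratified,
# weighted) design / variance under simple random sampling.
actual_variance = 0.00012  # hypothetical: variance under the survey design
srs_variance = 0.00008     # hypothetical: variance under simple random sampling

deff = actual_variance / srs_variance  # here, 1.5

# Standard errors are inflated by sqrt(deff) relative to simple random
# sampling, and the "effective" sample size is n / deff.
n = 12000
effective_n = n / deff
```

A design effect of 1.5 in this sketch would mean a nominal sample of 12,000 carries the information of a simple random sample of 8,000.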

8.15 Generally speaking, disproportionate stratification and sampling with non-equal probabilities tends to increase standard errors, giving a design effect greater than 1. The sampling design of the inpatient survey meets the criteria above in that disproportionate stratification is applied across the hospital sites and sub-site specialties. As a result, one would expect the design effect to be above 1 although only modestly so.

8.16 The standard errors used for tests for statistical significance take into account the design effects.

Inclusions and Exclusions

8.17 The national results exclude the 125 NHS patients treated in a private hospital, of which 73 responded.

8.18 The reports for individual boards excluded NHS patients treated in a private hospital. For the National Waiting Times Centre (Golden Jubilee hospital), patients who answered "Yes" to question 2 had their responses treated as invalid, and their responses to questions 3, 4, 5 and 16 were suppressed, because this hospital has no A&E, HDU or ICU.

8.19 Reports for NHS Boards and hospitals are only produced if there are 50 or more responses. If a particular question had fewer than 30 responses, the results for that question were suppressed.

8.20 For the hospital reports, results for the A&E section have not been shown for hospitals without an A&E department or a minor injury unit.[8]

Analysis

8.21 The survey data collected and coded by Patient Perspective and Quality Health were securely transferred to ISD via NHS.net and analysed using the statistical software package SPSS.

8.22 The analysis produced by ISD was transferred to the Scottish Government for inclusion in the national report.

Scotland Performs Healthcare Experience Indicator

8.23 The Healthcare Experience Indicator has been developed to measure the reported experience of people using the NHS. It is one of the 50 National Indicators in the National Performance Framework, which sets out the Government's outcomes based approach. Progress is reported in Scotland Performs: http://www.scotland.gov.uk/About/scotPerforms.

8.24 The indicator is based on the reported experience of hospital inpatients, as a proxy for experience across the NHS. This has been chosen because: (a) the quality of hospital care is very important to people; (b) the indicator involves the transitions to and from hospital, which depend on health and care services in the community; and (c) it includes inpatients' feedback on their experience in A&E, which should reflect a much wider population of users and acts as an indicator of the wider system.

8.25 The indicator is calculated by taking the mean scores for individual patients' answers on the following questions in the inpatient survey and weighting them using total inpatient numbers to get a national score:

  • Overall, how would you rate your admission to hospital (i.e. the period after you arrived at hospital but before you were taken to the ward)?
  • Overall, how would you rate the care and treatment you received during your time in the Emergency Department / Accident and Emergency?
  • Overall, how would you rate the hospital environment?
  • Overall, how would you rate your care and treatment during your stay in hospital?
  • Overall, how would you rate all the staff who you came into contact with?
  • Overall, how would you rate the arrangements made for you leaving hospital?

8.26 The score for each question for each patient is: 0 for very poor; 25 for poor; 50 for fair; 75 for good; 100 for excellent.

8.27 The mean of a patient's scores for the six questions is used rather than the sum because not all patients will have answered every question. The methodology will result in an indicator between 0 and 100 which is reported to one decimal place (Table 5).

Table 5 Example of how an individual patient's answers are converted into a score for the Healthcare Experience Indicator

(Answer options: Very poor = 0; Poor = 25; Fair = 50; Good = 75; Excellent = 100)

Question | Answer given | Score
Q5 Overall, how would you rate the care and treatment you received during your time in A&E? | (not answered) | -
Q10 Overall, how would you rate your admission to hospital? | Good | 75
Q13 Overall, how would you rate the hospital environment? | Good | 75
Q17 Overall, how would you rate your care and treatment during your stay in hospital? | Excellent | 100
Q21 Overall, how would you rate all the staff who you came into contact with? | Excellent | 100
Q24 Overall, how would you rate the arrangements made for you leaving hospital? | Excellent | 100

Patient Score = (75 + 75 + 100 + 100 + 100) / 5 = 90
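The patient-level scoring in paragraphs 8.25 to 8.27 can be sketched in Python (the survey's own analysis used SAS; this is purely illustrative):

```python
# Sketch of the patient-level scoring in paragraphs 8.25-8.27 and
# Table 5. Answers map to 0/25/50/75/100, and the patient's score is
# the mean over the questions actually answered (paragraph 8.27), so
# unanswered questions are simply skipped.

SCORE = {"very poor": 0, "poor": 25, "fair": 50, "good": 75, "excellent": 100}

def patient_score(answers):
    """Mean score over answered questions; unanswered entries are None."""
    scored = [SCORE[a] for a in answers if a is not None]
    return sum(scored) / len(scored)

# The worked example from Table 5: Q5 unanswered, then Good, Good,
# Excellent, Excellent, Excellent.
answers = [None, "good", "good", "excellent", "excellent", "excellent"]
print(patient_score(answers))  # 90.0
```

Using the mean rather than the sum keeps the score on the 0-100 scale however many of the six questions a patient answered.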

8.28 The analysis was done using the SAS procedure proc surveymeans which calculates sampling errors of estimators based on complex sample designs.

Quality Outcome Indicator

8.29 Twelve national Quality Outcome Indicators show progress towards the ambitions of the Quality Strategy. One of these indicators is Healthcare Experience. This indicator combines the Scotland Performs Healthcare Experience Indicator described above, with data from the Patient Experience Survey of GP and other local NHS services.

8.30 The indicator is calculated by taking the mean of the Scotland Performs Healthcare Experience Indicator and an indicator using data from a survey of people registered with a GP practice. The latest value of the Healthcare Experience Quality Outcome Indicator is based on the 2012 Inpatient Survey and the 2011/12 Patient Experience Survey of GP and other local NHS services[9].

8.31 The GP practice component of the indicator is calculated by taking the mean scores for individual patients' answers on the following questions and weighting them using GP practice populations to get a national score. As for the Healthcare Experience Indicator, the score for each question for each patient is: 0 for very poor; 25 for poor; 50 for fair; 75 for good; 100 for excellent.

  • Overall how would you rate the arrangements for getting to see a doctor and/or nurse in your GP surgery? As there are separate questions about doctors and nurses, the mean score of the answers is used.
  • Overall, how would you rate the care provided by your GP surgery?

8.32 The analysis was done using the SAS procedure proc surveymeans which calculates sampling errors of estimators based on complex sample designs.

8.33 The standard error of the indicator is calculated by combining the standard errors of the inpatient and GP components.
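Paragraph 8.33 can be sketched as follows, assuming (as paragraph 8.30 describes) that the combined indicator is the simple mean of two components and that the two components are independent, so their standard errors combine in quadrature. The figures below are hypothetical.

```python
# Sketch of paragraph 8.33: standard error of the mean of the inpatient
# and GP components, assuming the components are independent.
# Component standard errors below are hypothetical.
from math import sqrt

def combined_se(se_inpatient, se_gp):
    """SE of (inpatient + gp) / 2 for independent components:
    sqrt(se1^2 + se2^2) / 2."""
    return sqrt(se_inpatient ** 2 + se_gp ** 2) / 2

print(round(combined_se(0.3, 0.4), 3))  # 0.25
```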

Quality of these statistics - Sources of bias and other errors

Non-response bias

8.34 The greatest source of bias in the survey estimates is due to non-response. Non-response bias will affect the estimates if the experiences of respondents differ from those of non-respondents.

8.35 Although the contractors had access to the demographic characteristics of the sample for performing the sample checks, we did not have access to this information. This means that we cannot measure directly whether the survey respondents have different characteristics from those who did not respond, but there is evidence that they do. From the GP Patient Experience Survey we know that some groups (e.g. men and younger people) are less likely to respond to the survey. We also know that there are differences in the experiences of different groups: for example, younger people and women tend to be less positive about their experiences. The effect of this type of bias is that, with more older people responding, who are generally more positive, the estimates of the percentage of patients answering positively will be slightly biased upwards; conversely, with more women responding, who are generally less positive, the estimates will be slightly biased downwards.

8.36 The comparisons between different years of the survey should not be affected by non-response bias as the characteristics of the sample are similar for each year.

8.37 Some non-response bias is adjusted for by weighting the results. The response rates differ between hospitals, but weighting the results by patient numbers means that hospitals with lower response rates are not under-represented in the national results.

Sampling error

8.38 The results are affected by sampling error. However, due to the large sample size, the effect of sampling error is very small for the national estimates. Confidence intervals (95%) for the percentage of patients responding positively to a particular statement are generally less than +/- 1%.
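The order of magnitude of the +/- 1% claim in paragraph 8.38 can be checked with a back-of-envelope calculation. The figures below are hypothetical: a percentage positive of 90% from 20,000 respondents and an assumed design effect of 1.5 (paragraph 8.15 suggests only a value modestly above 1).

```python
# Back-of-envelope check of paragraph 8.38, with hypothetical inputs:
# p = proportion positive, n = respondents, deff = assumed design effect.
from math import sqrt

p, n, deff = 0.90, 20000, 1.5

se = sqrt(deff * p * (1 - p) / n)  # design-effect-adjusted standard error
half_width = 1.96 * se             # 95% CI half-width

print(round(100 * half_width, 2))  # about 0.51 percentage points
```

Even with a generous design effect, the half-width stays comfortably under 1 percentage point at this sample size.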

8.39 When comparisons have been made, the effects of sampling error are taken into account by the tests for statistical significance. Only differences that are statistically significant, that is that they are unlikely to have occurred by random variation, are reported as differences.

Other sources of bias

8.40 There are potential differences in the expectations and perceptions of patients with different characteristics. Patients with higher expectations will likely give less positive responses. Similarly patients will perceive things in different ways which may make them more or less likely to respond positively. When making comparisons between NHS Boards it should be remembered that these may be affected by differences in patient characteristics.

8.41 Some questions are potentially affected by patients who do not see the question as relevant to them answering "neither agree nor disagree" instead of "not relevant". An example of this type of question is "I got help with eating or drinking when I needed it". The effect of this is to reduce the percentage of patients answering positively. The answer scale was cognitively tested and participants were happy with the "neither agree nor disagree" option.

8.42 These other sources of bias should not affect comparisons between years.

8.43 In interpreting the results, consideration should also be given to the varying size of NHS Boards in Scotland. For example, NHS Orkney, an island board, has one hospital with an annual inpatient population of 705 at the time of the survey, whereas NHS Greater Glasgow and Clyde, a large board, has 16 hospitals with an inpatient population of 88,382. Across boards there is large variation in geographic coverage, population size, number of hospital sites and hospital type, which should be borne in mind when reviewing survey findings. For example, the results by type of hospital showed that both community and general hospitals were generally more positive than other hospital types; where there is a greater mix of these types of hospitals within a board, its results may be more positive.

Contact

Email: Gregor Boyd
