Scottish Inpatient Patient Experience Survey 2014 Volume 2: Technical Report

This report provides technical information on the survey design, sampling, fieldwork and analysis for the Scottish Inpatient Patient Experience Survey 2014.



8 Analysis and Reporting

Introduction

  • This section of the report presents details of the approach to weighting the data, confidence intervals, significance testing and design effects. The quality of the statistics, including sources of error and bias, is also discussed.

Weighting the data

  • As the sampling was based on a stratified approach, weighting was applied to ensure that the sample was reflective of the overall number of inpatients who were eligible to take part in the survey.
  • Estimates for Scotland and NHS Boards are weighted. Weighted results are calculated by weighting each stratum's result for each question by the relative number of eligible inpatients. The weight is calculated as the number of eligible inpatients in the stratum (aged 16 and over, and therefore eligible for the survey) as a proportion of the total number of eligible inpatients (for Scotland or the NHS Board). Weighting the results in this way provides more representative results because the contribution of each hospital to the national or NHS Board average is proportional to the number of eligible patients treated there.
  • There are other ways that the data could be weighted using the differences in the characteristics of those sampled. However historically this survey has been weighted to allow for the different sizes of the strata alone and continuing to use this methodology allows for consistent comparison of the weighted results between the surveys. If weighting to take account of differences in the characteristics of those sampled and responding was used then the results could not be validly compared with previous surveys and trends over time could not be assessed.
  • The English inpatient survey gives equal weight to all NHS trusts when calculating national results. Equal weighting has not been given to NHS Boards within the Scottish survey. Giving NHS Boards equal weighting as was done for NHS Trusts in England would provide misleading results as it would give Greater Glasgow and Clyde's results (where there are over 80,000 inpatients annually) the same weight as Orkney's (where there are fewer than 1,000 inpatients annually). The effect of this type of weighting for the Scottish results would be to inflate the national results because the smaller boards generally achieve higher positive scores.
  • Results provided within the Inpatient Patient Experience Survey national report use weighted data unless otherwise stated.
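The stratum weighting described above can be sketched as follows; the hospital names, eligible-patient counts and percentage-positive figures are invented for illustration only and are not survey data:

```python
# Invented strata: each has a count of eligible inpatients and a raw
# percentage-positive result for some question.
strata = [
    {"name": "Hospital A", "eligible": 8000, "pct_positive": 82.0},
    {"name": "Hospital B", "eligible": 1500, "pct_positive": 90.0},
    {"name": "Hospital C", "eligible": 500,  "pct_positive": 95.0},
]

total_eligible = sum(s["eligible"] for s in strata)

# Each stratum contributes in proportion to its share of eligible
# inpatients, so the large hospital dominates the average.
weighted_pct = sum(
    s["pct_positive"] * s["eligible"] / total_eligible for s in strata
)
# weighted_pct works out at 83.85, much closer to Hospital A's result
# than the unweighted mean of the three results (89.0).
```

This shows why the weighted national figure tracks the large boards: the weight of each stratum is simply its share of all eligible inpatients.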

Percentage positive and negative

  • Percentage positive is frequently used in the reporting; this means the percentage of people who answered in a positive way. If people said they strongly agreed or agreed, these answers have been counted as positive. If people said they disagreed or strongly disagreed, these have been counted as negative. Appendix B details which answers have been classed as positive and negative for each question.
  • Percentage positive is mainly used to allow easier comparison, rather than reporting results on the five-point scale that patients used to answer the questions. Another reason for doing this is that there may be little or no difference between a person who "strongly agrees" and one who "agrees" with a statement; indeed, some people may never strongly agree or disagree with any statement. Respondents who neither agreed nor disagreed have been classified as neutral.
  • A slight modification to the board and hospital level reporting this year was to separate those responding in a positive way into 'very positive' and 'positive'. This change was made to give additional detail to the reporting.
  • Appendix B details which answers have been classed as very positive and positive for each question (both are in the positive percentage column, with the very positive option given first).
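The classification described above can be sketched as follows (the mapping follows the scheme described in the text; the response data are invented for illustration):

```python
from collections import Counter

# Mapping of the five-point scale into reporting categories, as
# described in the text ("strongly agree" counted as very positive).
CATEGORY = {
    "Strongly agree": "very positive",
    "Agree": "positive",
    "Neither agree nor disagree": "neutral",
    "Disagree": "negative",
    "Strongly disagree": "negative",
}

# Invented responses to one question from five respondents.
responses = ["Strongly agree", "Agree", "Agree",
             "Neither agree nor disagree", "Disagree"]

counts = Counter(CATEGORY[r] for r in responses)

# Percentage positive combines the very positive and positive groups.
pct_positive = 100 * (counts["very positive"] + counts["positive"]) / len(responses)
print(pct_positive)  # 60.0
```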

Significance tests

  • As the national inpatient results are based on a survey of a sample of patients and not all hospital inpatients, the results are subject to sampling variability (information on sampling can be found in Chapter 4).
  • The survey used a disproportionately stratified (by site or sub-site specialty level) sample design with weights applied to estimate national or NHS Board averages. As described in 8.13, one of the effects of using stratification and weighting is that standard errors for survey estimates are generally larger than the standard errors that would be derived from a simple random sample of the same size. The calculations of standard error and comments on statistical significance have taken the stratification into account.
  • Comparisons with last year's percentage positive results are discussed on the basis that differences are statistically significant (at the 5% level). The normal approximation to the binomial distribution was used for this. This approach is equivalent to constructing a 95% confidence interval for the difference between the results; if this confidence interval does not contain 0 then the result is statistically significant at the 5% level.
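A minimal sketch of this test, assuming simple random sampling within each group (the survey's actual calculations additionally account for stratification and design effects; the function name and figures are illustrative):

```python
import math

def diff_significant(p1, n1, p2, n2, z=1.96):
    """Return (is_significant, confidence_interval) for the difference
    between two proportions, using the normal approximation to the
    binomial distribution at the 5% level."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    lower, upper = diff - z * se, diff + z * se
    # Significant at the 5% level if the 95% CI excludes zero.
    return (lower > 0 or upper < 0), (lower, upper)

# Illustrative figures only: 85% positive this year (n=2,000)
# versus 82% positive last year (n=2,000).
significant, ci = diff_significant(0.85, 2000, 0.82, 2000)
```

With large samples a 3 percentage point difference is detected; the same difference on samples of 200 would not be, because the confidence interval would then straddle zero.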

Design effects

  • One of the effects of using a disproportionately stratified random sample is that standard errors (a measure of sampling error) for survey estimates are generally larger than the standard errors that would be derived from a simple random sample of the same size.
  • The design effect is the ratio of the variance (the average squared deviation of a set of data points from their mean value) of a variable under the sampling method actually used to the variance computed under the assumption of simple random sampling. In short, a design effect of 2 would mean doubling the size of a simple random sample to obtain the same amount of information; a design effect of 0.5 would mean a simple random sample of half the size would suffice. Design effect adjustments are therefore applied to standard errors to account for the survey design.
  • Generally speaking, disproportionate stratification and sampling with non-equal probabilities tends to increase standard errors, giving a design effect greater than 1. The sampling design of the inpatient survey meets the criteria above in that disproportionate stratification is applied across the hospital sites and sub-site specialties. As a result, one would expect the design effect to be above 1 although only modestly so.
  • The standard errors used for tests for statistical significance take into account the design effects.
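The design-effect relationships above can be illustrated as follows (function names are illustrative, not part of the survey's actual code):

```python
import math

def effective_sample_size(n, deff):
    # The simple random sample size that would give the same precision
    # as the actual design of size n with design effect deff.
    return n / deff

def adjusted_se(p, n, deff):
    # Standard error of a proportion, inflated (deff > 1) or deflated
    # (deff < 1) by the design effect of the survey design.
    return math.sqrt(deff * p * (1 - p) / n)
```

For example, a design effect of 2 halves the effective sample size of a sample of 1,000 to 500, while a design effect of 0.5 doubles it to 2,000, matching the description above.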

Inclusions and Exclusions

  • A description of patients excluded from sampling is provided in paragraph 4.8.
  • For hospital reports, results for the A&E section have not been shown for hospitals without an A&E department or a minor injury unit.
  • Reports for NHS Boards and hospitals are only produced if there are 50 or more responses. If a particular question had fewer than 30 responses, the results for that question were suppressed.

Analysis

  • The survey data collected and coded by Quality Health were transferred to ISD via secure FTP and analysed using the statistical software package SPSS.
  • The analysis produced by ISD was transferred to the Scottish Government for inclusion in the national report.

Scotland Performs Healthcare Experience Indicator

  • The Healthcare Experience Indicator has been developed to measure the reported experience of people using the NHS. It is one of the 50 National Indicators in the National Performance Framework, which sets out the Government's outcomes based approach. Progress is reported in Scotland Performs: http://www.gov.scot/About/Performance/scotPerforms
  • The indicator is based on the reported experience of hospital inpatients, as a proxy for experience across the NHS. This has been chosen because: (a) the quality of hospital care is very important to people; (b) the indicator involves the transitions to and from hospital, which depend on health and care services in the community; and (c) it includes inpatients' feedback on their experience in A&E, which should reflect a much wider population of users and so acts as an indicator of the wider system.
  • The indicator is calculated by taking the mean scores for individual patients' answers on the following questions in the inpatient survey and weighting them using total inpatient numbers to get a national score:
  • Overall, how would you rate your admission to hospital (i.e. the period after you arrived at hospital until you got to a bed on the ward)?
  • Overall, how would you rate the care and treatment you received during your time in the A&E?
  • Overall, how would you rate the hospital and ward environment?
  • Overall, how would you rate your care and treatment during your stay in hospital?
  • Overall, how would you rate all the staff you came into contact with?
  • Overall, how would you rate the arrangements made for your leaving hospital?
  • The score for each question for each patient is: 0 for very poor; 25 for poor; 50 for fair; 75 for good; 100 for excellent.
  • The mean of a patient's scores for the six questions is used rather than the sum because not all patients will have answered every question. The methodology will result in an indicator between 0 and 100 which is reported to one decimal place (Table 7).

Table 7 Example of how an individual patient's answers are converted into a score for the Healthcare Experience Indicator

Answers are scored as follows: Very poor (0); Poor (25); Fair (50); Good (75); Excellent (100).

Q9  Overall, how would you rate the care and treatment you received during your time in A&E?  Not answered (-)
Q13 Overall, how would you rate your admission to hospital (i.e. the period after you arrived at hospital until you got to a bed on the ward)?  Good (75)
Q20 Overall, how would you rate the hospital and ward environment?  Good (75)
Q34 Overall, how would you rate your care and treatment during your stay in hospital?  Excellent (100)
Q49 Overall, how would you rate all the staff you came into contact with?  Excellent (100)
Q60 Overall, how would you rate the arrangements made for your leaving hospital?  Excellent (100)

Patient Score = (75 + 75 + 100 + 100 + 100) / 5 = 90
  • The analysis was done using the SAS procedure proc surveymeans which calculates sampling errors of estimators based on complex sample designs.
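The scoring and averaging above can be sketched as follows (the answer scores are those given in the text; the function name is illustrative):

```python
# Scores per answer as stated in the text.
SCORES = {"very poor": 0, "poor": 25, "fair": 50, "good": 75, "excellent": 100}

def patient_score(answers):
    """answers maps question IDs to answer text; None marks an
    unanswered question, which is excluded (as for Q9 in Table 7)."""
    scored = [SCORES[a.lower()] for a in answers.values() if a is not None]
    return sum(scored) / len(scored) if scored else None

# The worked example from Table 7: Q9 unanswered, so only the five
# answered questions count towards the mean.
example = {"Q9": None, "Q13": "Good", "Q20": "Good",
           "Q34": "Excellent", "Q49": "Excellent", "Q60": "Excellent"}
print(patient_score(example))  # 90.0
```

Taking the mean of answered questions (rather than the sum over all six) is what keeps scores comparable between patients who answered different numbers of questions.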

Quality Outcome Indicator

  • Twelve national Quality Outcome Indicators show progress towards the ambitions of the Quality Strategy. One of these indicators is Healthcare Experience. This indicator combines the Scotland Performs Healthcare Experience Indicator described above, with data from the Health and Care Experience Survey.
  • The indicator is calculated by taking the mean of the Scotland Performs Healthcare Experience Indicator and an indicator using data from a survey of people registered with a GP practice. The latest value of the Healthcare Experience Quality Outcome Indicator is based on the 2014 Inpatient Survey and the 2013/14 Health and Care Experience Survey[6].
  • The GP practice component of the indicator is calculated by taking the mean scores for individual patients' answers on the following questions and weighting them using GP practice populations to get a national score. As for the Healthcare Experience Indicator, the score for each question for each patient is: 0 for very poor; 25 for poor; 50 for fair; 75 for good; 100 for excellent.
  • Overall, how would you rate the arrangements for getting to see a doctor and/or nurse in your GP surgery? As there are separate questions about doctors and nurses, the mean score of the answers is used.
  • Overall, how would you rate the care provided by your GP surgery?
  • The analysis was done using the SAS procedure proc surveymeans which calculates sampling errors of estimators based on complex sample designs.
  • The standard error of the indicator is calculated by combining the standard errors of the inpatient and GP components.
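Assuming the two component surveys are independent, the combination described above can be sketched as follows (the function name and example figures are illustrative, not published values):

```python
import math

def combined_indicator(inpatient, gp):
    """Quality Outcome Indicator as the mean of the two component
    indicators; each argument is a (value, standard_error) pair.
    Assuming independent surveys, Var(mean) = (Var1 + Var2) / 4,
    so the combined SE is sqrt(se1**2 + se2**2) / 2."""
    value = (inpatient[0] + gp[0]) / 2
    se = math.sqrt(inpatient[1] ** 2 + gp[1] ** 2) / 2
    return value, se

# Illustrative figures only: inpatient indicator 83.0 (SE 0.4),
# GP component 81.0 (SE 0.3).
value, se = combined_indicator((83.0, 0.4), (81.0, 0.3))
```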

Quality of these statistics - Sources of bias and other errors

Non-response bias

  • The greatest source of bias in the survey estimates is due to non-response. Non-response bias will affect the estimates if the experiences of respondents differ from those of non-respondents.
  • From the Health and Care Experience Survey we know that some groups (e.g. men and younger people) are less likely to respond to the survey. We also know that the experiences of different groups differ (e.g. younger people and women tend to be less positive about their experiences). For example, because older people are more likely to respond and are generally more positive, estimates of the percentage of patients answering positively will be slightly biased upwards; conversely, because women are more likely to respond and are generally less positive, the estimates will be slightly biased downwards.
  • The comparisons between different years of the survey should not be affected by non-response bias as the characteristics of the sample are similar for each year.
  • Some non-response bias is adjusted for by weighting the results. The response rates differ between hospitals, but weighting the results by patient numbers means that hospitals with lower response rates are not under-represented in the national results.

Sampling error

  • The results are affected by sampling error. However due to the large sample size the effect of sampling error is very small for the national estimates. Confidence intervals (95%) for the percentage of patients responding positively to a particular statement are generally less than +/- 1%.
  • When comparisons have been made, the effects of sampling error have been taken into account in the tests for statistical significance. Only differences that are statistically significant, that is that they are unlikely to have occurred by random variation, are reported as differences.
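As an illustration of the confidence-interval figure above, a rough 95% interval for a proportion can be computed as follows (the function name and figures are illustrative; an optional design-effect inflation is included):

```python
import math

def ci_95(p, n, deff=1.0):
    # Approximate 95% confidence interval for a proportion, optionally
    # inflated by a design effect (deff).
    se = math.sqrt(deff * p * (1 - p) / n)
    return p - 1.96 * se, p + 1.96 * se

# Illustrative figures: 85% positive from an effective sample of
# 10,000 gives an interval narrower than +/- 1 percentage point,
# consistent with the statement above.
lower, upper = ci_95(0.85, 10_000)
```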

Other sources of bias

  • There are potential differences in the expectations and perceptions of patients with different characteristics. Patients with higher expectations will likely give less positive responses. Similarly patients will perceive things in different ways which may make them more or less likely to respond positively. When making comparisons between NHS Boards it should be remembered that these may be affected by differences in patient characteristics.
  • Some questions are potentially affected by patients who do not see the question as relevant to them answering "neither agree nor disagree" instead of "not relevant". An example of this type of question is "I got enough help with eating or drinking when I needed it". The effect of this is to reduce the percentage of patients answering positively. The answer scale was cognitively tested and participants were happy with the "neither agree nor disagree" option.
  • These other sources of bias should not affect comparisons between years.
  • In interpreting the results, consideration should also be given to the varying size of NHS Boards in Scotland. For example, NHS Orkney, an island board, has one hospital with a six-month inpatient population of 470 at the time of the survey, whereas NHS Greater Glasgow and Clyde, a large board, has 16 hospitals with an inpatient population of 45,036 for the same six-month period. Across Boards there is large variation in geographic coverage, population size, number of hospital sites and hospital type, which should be borne in mind when reviewing survey findings. For example, the results by type of hospital showed that both community and general hospitals were generally more positive than other hospital types, so boards with a greater mix of these hospital types may have more positive results.

Contact

Email: Andrew Paterson
