Patient Experience Survey of GP and Local NHS Services 2011/12 Volume 1: Technical Report

Scottish Patient Experience Survey of GP and Local NHS Services 2011/12. This is a postal survey which was sent to a random sample of patients who were registered with a GP in Scotland in October 2011. This report contains details of the survey design and development.



9 Analysis and Reporting

Introduction to analysis

9.1 The survey data collected and coded by Picker Institute Europe and Ciconi Ltd were securely transferred to ISD Scotland, where the information was analysed using the statistical software package SPSS version 17.0.

Reporting patient gender

9.2 Analysis of survey response rates by gender was done using the gender of the sampled patients, according to their CHI record.

9.3 For all other analyses by gender, where survey respondents had reported a valid gender in response to question 39, this information has been used in reporting. Where the respondents did not answer the question or gave an invalid response, gender information from the sampled patient's CHI record was used.

9.4 Self-reported gender was used where possible as in a small proportion of responses the reported information and the information on CHI differed. The most likely reason for this is that the questionnaire was sent to one patient but was completed by or on behalf of another one registered to the same practice (e.g. a recipient passing their questionnaire to a spouse).

9.5 In total, 143,696 responders (98.7%) provided a valid response to the question on gender (question 39). Of these, there was a difference between self-reported gender of the respondent and the gender of the originally sampled patient in 2,043 cases (1.4%). Amongst this group it was more frequently the case that a survey questionnaire originally sent to a male was responded to by a female (n = 1,161), than it was that a questionnaire sent to a female was answered by a male (n = 882). As practice contact rates are generally higher in females than males, one possible reason for this is that some male survey recipients may not have been to their practice for some time and passed their questionnaire to a female member of their household.

Reporting patient age

9.6 Analysis of survey response rates by age was done using the age of the sampled patients, according to their CHI record at the time of data extraction (17 October 2011).

9.7 For all other analyses by age where survey respondents had reported a valid age in response to question 40, this information has been used in reporting. Where the respondents did not answer the question or gave an invalid response, age information from the sampled patient's CHI record was used.

9.8 Valid age was taken to be anything between 17 and 108 years. When reporting question results by age group, a small proportion of cases where age was reported as 16 or less were treated as invalid responses to the question. However, it is likely that in at least some of these instances the respondents were giving their feedback about their experience at the practice when making an appointment for their child, and in doing so reported the child's age rather than their own.
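The fallback rule described in paragraphs 9.7 and 9.8 can be sketched as follows. This is an illustrative reconstruction, not the actual processing code; the function name and data layout are assumptions.

```python
# Sketch of the age-cleaning rule: use self-reported age when it is
# valid (17-108 years), otherwise fall back to the age derived from
# the sampled patient's CHI record.

def age_for_analysis(self_reported, chi_age, lower=17, upper=108):
    """Return the age used in reporting for one respondent.

    self_reported may be None (question left blank) or an out-of-range
    value; chi_age is the age calculated from the CHI record.
    """
    if self_reported is not None and lower <= self_reported <= upper:
        return self_reported
    return chi_age

# A respondent who reported their child's age (e.g. 7) is treated as
# invalid and falls back to the CHI-derived age.
print(age_for_analysis(7, 45))   # 45
print(age_for_analysis(52, 45))  # 52
```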

9.9 Self-reported age was used where possible in preference to age derived from the CHI record as in a proportion of responses the reported information and the information on CHI differed. Reasons for this include the questionnaire being sent to one patient but being completed by or on behalf of another one registered to the same practice (e.g. a recipient passing their questionnaire to a family member or spouse). In some of these instances, where the survey recipient and another member of their household had exactly the same forename(s) and surname (e.g. a father and son), the questionnaire may have been answered by the namesake of the individual sent the questionnaire.

9.10 In total, 139,489 responders (95.8%) provided a valid response to the question on age at last birthday (question 40). Of these, the self-reported age and the age calculated from the CHI record differed by two or more years in 3,494 cases (2.5%). In a further 15,919 cases (11.4%) there was a difference of one year. This is not unexpected, however, as many recipients would have had a birthday between 17th October 2011 and the date they responded to their questionnaire (November 2011 - January 2012).

9.11 In many instances where the age calculated from the CHI record differed from the age reported by the survey respondents, the associated age group used for analysis remained the same, whether based on CHI or based on the survey response. In 3,956 cases, however, the record was counted under a different age group for response rate analysis than the one used for all other analyses. Of these, 2,543 (64.3%) were in an older group for the main analysis of results than for analysis of response rates (Table 11). Some of this relates to individual recipients having a birthday and "moving up" by a single age group. In other instances it reflects the respondent being a different individual to the person sent the questionnaire, and being likely to be somewhat older than the originally sampled patient; older people were more likely to respond to the survey than younger people.

Table 11 Where reported age and CHI age groups are different

Age group from survey          Age group calculated from CHI record as at 17 October 2011
response (Nov 2011 -
Jan 2012)                   17-24   25-34   35-44   45-54   55-64   65-74   75+     Total
17 - 24                         0      37      17      38      19      24     13       148
25 - 34                       118       0     103      28      18      19     26       312
35 - 44                        38     181       0     173      61     114     32       599
45 - 54                        97      55     239       0     191      53    115       750
55 - 64                        34     101      66     373       0     155     19       748
65 - 74                        25      29      86      54     520       0    158       872
75 and over                    16      24      36      63      36     352      0       527
Total                         328     427     547     729     845     717    363     3,956

Reporting deprivation and urban/rural status

9.12 Patient postcodes were used to match records to deprivation and urban/rural status information as defined by the Scottish Government. The versions used were:-

9.13 A small minority of records were not matched to deprivation or urban/rural information, for example because the postcodes were not valid or recognised by the reference files used in the matching. Table 12 below shows the numbers and percentages of records that were not assigned to a deprivation or urban/rural category.

Table 12 Patients that could not be assigned urban/rural or deprivation categories

                                                     All responders      Sampled patients
                                                       n        %          n        %
Urban/Rural: Patient not assigned to a classification  817     0.56      4,498     0.74
Deprivation: Patient not assigned to a quintile        445     0.31      2,228     0.37

Number of responses analysed

9.14 The number of responses that have been analysed for each question is often lower than the total number of responses received. This is because not all of the questionnaires that were returned could be included in the calculation of results for every individual question. In each case this was for one of the following reasons:-

  • The specific question did not apply to the respondent, so they did not answer it. For example, if their GP had not referred them to see any other health professionals in the last 12 months, they did not have to rate the arrangements for getting to see another NHS health professional.
  • The respondent did not answer the question for another reason (e.g. refused). Patients were advised that if they did not want to answer a specific question they should leave it blank.
  • The respondent answered that they did not know or could not remember the answer to a particular question.
  • The respondent gave an invalid response to the question, for example ticking more than one box where only one answer could be accepted.

9.15 The number of responses that have been analysed nationally for each of the percent positive questions is shown in Annex A.

Weighting

9.16 Results at Scotland, NHS Board and CHP level are weighted. Weighted results were calculated by first weighting each GP Practice result for each question by the relative practice size. The weighted practice results were then added together to give an overall weighted percentage at Scotland, NHS Board and CHP level. The weight for each practice is calculated as the practice patient list size (of patients aged 17+ and therefore eligible for being included in the survey sample) as a proportion of the entire population (Scotland, NHS Board or CHP) of patients eligible for inclusion in the survey. Many of the questions in the survey relate to the specific practice that the patient attended during 2011/12. Therefore, weighting the results in this way provides results more representative of the population (at Scotland, NHS Board or CHP level) than would be the case if all practices (large and small) were given equal weighting in the calculation of aggregate results.
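The weighting calculation described above can be sketched as follows. The practice list sizes and percent positive results are invented for illustration only; the actual calculation was carried out in SPSS.

```python
# Minimal sketch of practice-size weighting: each practice's weight is
# its eligible (17+) list size as a proportion of the total eligible
# population, and the aggregate result is the weighted sum of the
# practice-level results.

def weighted_percent_positive(practices):
    """practices: list of (list_size_17plus, percent_positive) tuples,
    one per GP practice in the area (Scotland, NHS Board or CHP)."""
    total = sum(size for size, _ in practices)
    return sum(size / total * pct for size, pct in practices)

# A large practice (8,000 eligible patients, 90% positive) pulls the
# aggregate closer to its own result than a small one (2,000, 70%).
result = weighted_percent_positive([(8000, 90.0), (2000, 70.0)])
print(round(result, 1))  # 86.0
```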

Percentage positive and negative

9.17 Percent or percentage positive is a term frequently used in the reporting. This means the percentage of people who answered in a positive way. For example, when people were asked how helpful the receptionists are, if people said 'very helpful' or 'fairly helpful', these have been counted as positive answers. Similarly those patients who said they found the receptionists 'not very helpful' or 'not at all helpful' have been counted as negative. Annex A details which answers have been classed as positive and negative for each question.

9.18 Percentage positive is mainly used to allow easier comparison, rather than reporting results on the five point scale that patients used to answer most of the questions. Differences between answers on a five point scale may also be subjective: for example, there may be little or no difference between a person who "strongly agrees" and one who "agrees" with a statement, and some people may never strongly agree or strongly disagree with any statement.

Quality of these statistics - Sources of bias and other errors

Sampling error

9.19 It should be kept in mind that because the results are based on a survey of sampled patients and not the complete population of Scotland, the results are affected by sampling error. More information on sampling can be found in Chapter 4. However due to the large sample size the effect of sampling error is very small for the national estimates. Confidence intervals (95%) for the percentage of patients responding positively to a particular statement are generally less than +/- 1%.

9.20 When comparisons have been made, the effects of sampling error are taken into account by the tests for statistical significance. Only differences that are statistically significant, that is that they are unlikely to have occurred by random variation, are reported as differences.

Non-response bias

9.21 The greatest source of bias in the survey estimates is due to non-response. Non-response bias will affect the estimates if the experiences of respondents differ from those of non-respondents.

9.22 We know that some groups (e.g. men and younger people) are less likely to respond to the survey. This is partly explained by the fact that men and younger people are less likely to visit their GP practice. We also believe that there are differences in the reported experiences of different groups (e.g. from the Scottish Inpatient Patient Experience Survey we found that younger people tend to be less positive about their experiences and women tend to be less positive8). An example of the effects of this type of bias is that with more older people responding, who are generally more positive, the estimates of the percentage of patients answering positively will be slightly biased upwards. Another example is that with more women responding, who are generally less positive, the estimates of the percentage of patients answering positively will be slightly biased downwards.

9.23 The comparisons between different years of the survey should not be greatly affected by non-response bias as the characteristics of the sample are reasonably similar for each year.

9.24 Some non-response bias is adjusted for by weighting the results. The response rates differ between GP practices, but weighting the results by patient numbers means that GP practices with lower response rates are not under-represented in the national results. Results could have been weighted by patient factors such as age and gender.

Other sources of bias

9.25 There are potential differences in the expectations and perceptions of patients with different characteristics. Patients with higher expectations will likely give less positive responses. Similarly patients will perceive things in different ways which may make them more or less likely to respond positively. When making comparisons between NHS Boards it should be remembered that these may be affected by differences in patient characteristics. This should not affect comparisons between years.

Sample design

9.26 The survey used a stratified sample design rather than a simple random sample approach. Those included in a simple random sample are chosen randomly by chance giving an equal probability of being selected. Simple random samples can be highly effective if all subjects return a survey; giving precise estimates and low variability. However, simple random samples are expensive and can not guarantee that all groups are represented proportionally in the sample.

9.27 Stratified sampling involves separating the eligible population into groups (i.e. strata) and then assigning an appropriate sample size to each group to ensure that a representative sample size is taken. This survey was stratified by GP practice and was based on a disproportionate stratified sample design because the sampling fraction was not the same within each of the practices. Some practices were over-sampled relative to others (i.e. had a higher proportion of their patients included in the sample) in order to achieve the minimum number of responses required for analysis (please see Chapter 4 for more information on the sample size calculation).

Design factor

9.28 Results at national, NHS Board and CHP level were weighted by relative size of each practice (stratum). One of the effects of using stratification and weighting is that this can result in standard errors for survey estimates being generally higher than the standard errors that would be derived from an unweighted simple random sample of the same size.

9.29 Features of using a disproportionate stratified sampling design can affect the standard errors that are used to calculate confidence intervals and test statistics. Calculating the design factor (Deft) can tell us by how much the standard error is increased or decreased compared to a simple random sample design, given the design that we have used. The design factor has been incorporated into the confidence interval calculations at national, NHS Board and CHP level.

Design effect

9.30 The design effect (Deff) is the square of the design factor and can tell us how much information we have gained or lost by using a complex survey design rather than a simple random sample.9 For example, a design effect of two would mean that we would need to have a survey that is twice the size of a simple random sample to obtain the same volume of information and precision of a simple random sample. A design effect of 0.5 would mean that we would gain the precision from a complex survey of only half the size of a simple random sample. The design effect has been incorporated into the test statistic calculations at national, NHS Board and CHP level.

Confidence Intervals

9.31 The reported results for the percentages of patients answering positively have been calculated from the patients sampled. As with any sample, if we had asked a different group of patients, we could have ended up with a different result.

9.32 Confidence intervals provide a way of quantifying this sampling uncertainty. A 95% confidence interval gives a range that we can be 95% sure that it contains the "true" result i.e. the results we would have obtained had we asked the same question to all of the practice's patients.

9.33 If, for example, the percentage positive result for a particular question is 80% and the confidence interval is +/-5%, this means we are 95% sure that the result should be between 75% and 85%.

9.34 Confidence intervals have been calculated for the percent positive results of question 25 (the overall rating of care provided by the GP surgery) by NHS Board in Table 13. This provides an example of the accuracy of the estimates provided in Table 14 of the national report. More details on this calculation are available in Annex D.

Table 13 Percentage of patients rating the overall care provided by their GP surgery as excellent or good, by NHS Board, with 95% confidence intervals

NHS Board Percentage 95% confidence interval
Lower Limit Upper Limit
Ayrshire and Arran 88 86.8 88.5
Borders 90 88.8 91.2
Dumfries and Galloway 92 91.1 93.1
Fife 86 85.3 87.1
Forth Valley 88 87.2 88.9
Grampian 89 88.2 89.6
Greater Glasgow and Clyde 90 89.2 90.0
Highland 91 90.3 91.6
Lanarkshire 85 84.7 86.1
Lothian 88 87.3 88.4
Orkney 96 94.8 97.6
Shetland 84 81.4 87.3
Tayside 90 89.6 91.0
Western Isles 92 90.7 94.2
Scotland 89 88.37 88.77

9.35 Confidence intervals for the results of all percentage positive questions will be published on the Scottish Government website at national, NHS Board and CHP level.

Tests for Statistical Significance

9.36 A result can be described as statistically significant if it is unlikely to have occurred by random variation. Testing for statistical significance allows us to assess whether there have been significant increases or decreases in performance between one time period and another. Similarly it can allow us to assess whether a result for an NHS Board or CHP is significantly higher or lower than the result for Scotland as a whole. The effects of sampling error are taken into account by the tests for statistical significance.

9.37 Where possible, comparisons with percent positive results from the 2009/10 GP patient experience survey have been made at NHS Board, CHP and practice level within individual reports. Scores which have significantly improved since the 2009/10 survey have been reported as Plus sign. Scores which have significantly worsened since the 2009/10 survey have been reported as Minus sign. Reports at NHS Board, CHP and practice level are available at http://surveyresults.bettertogetherscotland.com/.

9.38 Comparisons with the 2009/10 percentage positive results at national level are discussed within the national report on the basis that differences are statistically significant.

9.39 Comparisons with the 2011/12 national (i.e. Scotland) percent positive results have also been made at NHS Board, CHP and practice level and can be found within the individual reports. Differences which are statistically significant are shown as Plus sign where the percent positive score is significantly higher than the national average; and Minus sign where the percent positive score is significantly lower than the national average.

9.40 All significance testing was carried out at 5% level. The normal approximation to the binomial theorem was used for this. This approach is equivalent to constructing a 95% confidence for the difference between the results.

9.41 As discussed in section 9.26, when calculating the test statistics at national, NHS Board and CHP level, the standard error has been multiplied by the design factor (Deft).

9.42 More details on tests for statistical significance are available in Annex E.

Outcomes of NHS treatment indicator

9.43 The Quality Strategy emphasises the importance of measurement, and a Quality Measurement Framework has been developed10 in order to provide a structure for describing and aligning the wide range of measurement work with the Quality Ambitions and Outcomes. As part of this framework, 12 national Quality Outcome Indicators have been identified, which are intended to show national progress towards achievement of the Quality Ambitions.

9.44 One of these twelve Quality Outcome Indicators relates to Patient Reported Outcomes. This is reported in Chapter 10 of the national report.

9.45 An average score is calculated for each respondent based on the outcomes questions they have answered. (Patients answering none of the three questions are not included.) These average scores are weighted by the number of patients registered at each GP practice to give scores for NHS Boards and Scotland.

9.46 The three outcomes questions and how the responses were scored are presented below.

  • In the last 12 months, have you received NHS treatment or advice because of something that was affecting your ability to do your usual activities? …how would you describe the effect of the treatment on your ability to do your usual activities?

Table 14 Scores for outcomes for something affecting ability to undertake usual activities

I was able to go back to most of my usual activities 100
There was no change in my ability to do my usual activities 50
I was less able to do my usual activities 0
It is too soon to say Don't include
  • In the last 12 months, have you received NHS treatment or advice because of something that was causing you pain or discomfort? …how would you describe the effect of the treatment on your pain or discomfort?

Table 15 Scores for outcomes for something causing pain or discomfort

It was better than before 100
It was about the same as before 50
It was worse than before 0
It is too soon to say Don't include
  • In the last 12 months, have you received NHS treatment or advice because of something that was making you feel depressed or anxious? …how would you describe the effect of the treatment on how you felt?

Table 16 Scores for outcomes for something making patients feel depressed or anxious

I felt less depressed or anxious than before 100
I felt about the same as before 50
I felt more depressed or anxious than before 0
It is too soon to say Don't include

Quality assurance of the national report

9.47 A small group of Scottish Government policy leads, a clinical lead and the Chair of the Scottish Council of the Royal College of General Practitioners (RCGP)11 were sent a draft version of the national report for quality assurance. Feedback included suggestions on ways in which to report data as well as comments about the context for the survey. These were taken into account in finalising the national report. In addition ISD Scotland carried out quality checks of all figures used in the report.

Contact

Email: Gregor Boyd

Back to top