Developing a Mental Health Experience of Services (MHES) Survey for Scotland

This report explores and summarises the requirements for, and options for delivering, a National Mental Health Experience of Services (MHES) survey. The MHES would gather regular data on service user experiences and help inform standards measurement, service improvement, policy development and NHS reform.


Chapter Three: Survey scope and sampling

This chapter presents an analysis of views on possible sampling approaches for a MHES survey. This includes:

  • Who could be asked to complete the survey.
  • Which services could be covered.
  • Information needed to inform the sampling strategy.
  • Sample size.

Chapter summary

Key Findings and Further Considerations

  • A Mental Health Experience of Services (MHES) survey should aim to include a wide range of experiences and voices. This includes individuals who have accessed services and those who have attempted to access them. Ideally, every service user should be invited to participate, rather than selecting a sample.
  • However, it was acknowledged that such a broad scope may not be feasible. Expanding the survey to include carers, family members, and friends of people with mental health conditions was also suggested.
  • The focus of the survey needs careful consideration, balancing the desire to understand service user needs with practical limitations.
  • There was no consensus on which services should be included. Suggestions ranged from focusing solely on inpatient services to including all mental health services.
  • A practical approach may be to begin with a focused pilot survey targeting inpatient services and adult secondary community services that have adopted the Mental Health Core Standards. While this would provide a clearly defined data set, it may limit insights into the broader mental health service landscape.
  • Sampling strategies will depend on which services and service users are included. Challenges were noted in accessing existing databases to identify participants.
  • Alternative sampling methods could include:
      • Recruiting participants immediately after service engagement.
      • Distributing surveys through services.
      • Providing online links or QR codes at the point of service delivery.
  • A larger sample would likely be more diverse and reduce the risk of unintentionally identifying individual service users. However, the required sample size will depend on the scope of services and participants included.
  • Combining multiple waves of data collection could help maximise sample size.
  • Sample stratification was recommended to ensure representation across different areas, age groups, and genders.

Possible participants

TLB survey respondents were asked to select all the groups they thought should be given the opportunity to participate in a MHES survey. Two key groups of potential MHES survey respondents were identified. Four fifths (82%) felt that anyone with experience of using mental health services should be invited to participate in a MHES survey. A slightly smaller proportion (78%) thought that anyone who has attempted to access mental health services and support should also be invited. Two fifths (40%) thought that anyone with a confirmed diagnosis should be included in the survey sample. Therefore, from the perspective of TLB survey respondents, there is a desire to capture a very wide range of experiences.

Figure 3. Q8: In your opinion, who should be given the opportunity to participate in an ‘Experience of Mental Health Services and Support’ survey? Base = 105.
Bar chart showing responses from 105 participants regarding their experience with mental health services and support. The highest percentage (82%) represents individuals who have experience using mental health services, followed closely by 78% who have attempted to access such services. 40% have a confirmed diagnosis. Smaller groups include 14% who are a selected sample of those with service experience, 10% who selected Other, and 5% who are a selected sample of those with a confirmed diagnosis.

Interviewees suggested several groups who could be surveyed. Their views aligned with the survey respondents’ desire for a wide range of voices to be captured, and also highlighted a few additional groups that could be included, echoing those mentioned by the 10 TLB survey respondents who suggested ‘other’ audiences (see below).

While interviewees most commonly mentioned surveying people with poor mental health, there was little consensus on how this could be defined. Suggestions included surveying inpatients, service users, people with a mental health diagnosis, all people with poor mental health including those without a diagnosis, people being prescribed medication for mental health, and people who have accessed online support. Capturing the voices of people on waiting lists and those with rejected referrals was suggested by a small number of interviewees, to understand why they were unable to access services and whether support was received while waiting.

Interviewees described challenges arising from including or excluding certain groups. These included the following:

  • A survey that includes anyone with poor mental health, regardless of diagnosis, severity, or access to services, will be of such a large scale that it will negatively impact the feasibility and resourcing of the survey and the ability to ask service-specific questions. Given the desire for a survey to understand and improve experience of mental health services, interviewees questioned whether there was a need for such a broadly defined sample.
  • Conversely, it was felt that it would be too extreme if only inpatients were included in the sample. It was noted that this approach would not enable a whole-system representative view, which may limit the usefulness of the data, as well as potentially generating ethical challenges.
  • If only people with diagnoses are included, the voices of those who have had challenges in getting a diagnosis would be excluded. One interviewee commented that diagnosis can be an arbitrary way of deciding what need is, suggesting instead that anyone self-declaring poor mental health be included.
  • There were also suggestions to have an initial narrower focus to make the survey more manageable and smaller in scale. For example, focusing on a defined population of adult inpatients discharged from care, or including children and young people at a later date.

A few interviewees and TLB survey respondents also suggested surveying carers, family members and friends of people with poor mental health. They noted the importance of their support for the recovery and experience of people with poor mental health. The inclusion of children and young people within the survey was also raised, with a note to include their parents and carers in the survey if that was the case. Including staff from mental health services was also suggested by one survey respondent. Another suggested capturing experiences of those who support mental health services, such as ambulance services, GPs (General Practitioners), social workers and police. Looking specifically at the experiences of neurodivergent people was also suggested.

Examples from the evidence review: Mental health-specific surveys varied in the services they covered, ranging from community mental health services only, to inpatient mental health services only, to both community and inpatient services (most covered both). For example, the South Dakota State Government Community Survey is a general population survey that asks about experiences of mental health services, while the Australian Institute of Health and Welfare’s Your Experience of Service (YES) survey focused on community and inpatient mental health service users across 86 services. Similarly, the State of Delaware’s Mental Health Statistics Improvement Program (MHSIP) Adult Consumer Survey covered both community and inpatient mental health service users, but the Arkansas Community Mental Health Centers and Clinics survey by Arkansas Medicaid Providers involved community mental health service users only. For more information, the bibliography in Appendix B provides links to each of the surveys in the evidence review.

Regardless of which groups are included in the sample, there was recognition of the need to ensure a range of voices within these groups is captured. This includes recording both good and bad experiences of services, as well as diversity in culture, language, ethnicity, age, disability and other protected characteristics.

Possible services to sample

Interviewees highlighted that mental health services are wide-ranging and suggested multiple options for which services to include in a MHES survey. Again, it was recognised that the greater the scale of the survey, the more resource-intensive and complex it becomes, while the survey still needs a clear purpose and must produce useful data. For example, including multiple services will likely require different, service-specific question sets.

Options for including different services comprised capturing:

  • Inpatient services only. While this would produce a clearly defined dataset, this approach could be seen as too narrowly defined and as not being representative of most people’s experiences of mental health and service use. It was noted that people in inpatient services or those with severe mental health conditions may be in distress. This points to ethical and broader considerations about the point at which a survey is disseminated to service users within a particular setting.
  • Experiences across all mental health supports. This could include all those who access mental health services, those who go to their GP for mental health support and those who access third-sector support. However, one interviewee noted that it may not be feasible to cover the whole of Scotland.
  • Community-level data where informal mental health support is provided. This could include youth clubs and community interventions. However, capturing this information could be more challenging than in more structured clinical settings.
  • Data where the mental health standards have been implemented for secondary care and psychological therapies.
  • Data at the level of specific mental health interventions.
  • Data on the continuum of access to care – from GP services, community-based services, NHS services, inpatient services, general services, specialist services and third-sector services. It was also suggested that this could include police and crisis responses. This would enable an understanding of all service interactions, movement between services, and overall journeys of support.

Interviewees acknowledged that there is no clear answer and greater consideration would need to be given to which services to include. It was suggested that more consultation be held on which services would bring the most value.

Examples from the evidence review: Sampling across the evidence was mainly based on the use of certain mental health services or other health services (such as maternity and cancer services). For example, CQC’s Community Mental Health Survey in England covered 53 NHS mental health trusts. A random sample of at least 1,250 people per trust, aged 16 or over, who had accessed services between 1 April and 31 May 2023 was invited to take part. The survey achieved a 20% response rate, resulting in an overall sample of 14,770.

Five surveys were distributed to all users of a particular service. For example, the British Columbia Provincial Government’s Mental Health and Substance Use Short-Stay Inpatient Experiences Survey handed all service users a survey 24 hours before planned discharge, if they consented to take part. The Alaska State Department of Health and Social Services’ 2016-2019 Annual Behavioural Health Consumer Survey was sent to all clients who had accessed at least one documented mental health and/or substance abuse service within the required time period.

Sampling strategy

Four interviewees commented on what information would be required to inform the sampling strategy. However, it was noted that this was challenging to answer without first knowing what the survey sample would be, i.e. which people and services would be included. Ethical and privacy challenges were also raised.

Barriers to using NHS data to contact participants were also noted, with a suggestion that practitioners, GPs, and bereavement counsellors should mention the survey to service users rather than gathering contact details to develop a sample. It was stated that mental health status is classified as an ‘extra sensitive’ health privacy category within PHS datasets; therefore, access to this data would need Public Benefit and Privacy Panel (PBPP) approval. One interviewee suggested it may be easier to directly sample the population than to rely on other existing databases and systems.

Due to ethical and privacy challenges, interviewees noted it may be safer to identify a particular service or course of treatment and distribute a survey to that defined group once they complete treatment, rather than sourcing names and addresses to inform a sample.

Sample size

Interviewees suggested that a MHES survey should have a large enough sample to prevent unintentional identification of participants, which could occur with a small sample. If there is a small sample for specific services or health boards, such as for services in remote areas with small service user numbers, interviewees suggested reporting at a high level to prevent identification. Interviewees raised the need for a diverse sample, which would be more likely to come from a larger sample. While these suggestions were made, there was not a clear consensus on the sample size for a MHES survey across interviewees. One interviewee stated that if money were not an issue, the best approach would be a large sample or to sample everyone.
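The suggestion to report at a high level where numbers are small can be sketched as a simple small-cell suppression rule. This is an illustrative sketch only: the threshold of 10 and the health board counts below are assumptions for demonstration, not part of any agreed disclosure policy.

```python
def suppress_small_cells(counts, threshold=10):
    """Replace counts below a disclosure threshold with a suppressed marker,
    so that small service-user numbers (e.g. in remote areas) cannot lead to
    unintentional identification in published tables."""
    return {k: (v if v >= threshold else "<10") for k, v in counts.items()}

# Hypothetical response counts by health board (illustrative figures only)
by_board = {"NHS Lothian": 412, "NHS Orkney": 6, "NHS Fife": 97}
print(suppress_small_cells(by_board))
# → {'NHS Lothian': 412, 'NHS Orkney': '<10', 'NHS Fife': 97}
```

In practice the threshold, marker, and any secondary suppression rules would follow the relevant statistical disclosure control guidance rather than a fixed value like this.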

One interviewee noted that the number of people within the survey could be large depending on the scope, so they suggested stratifying the sample, i.e. drawing the same number of people by area, age, and gender. However, they noted that these quotas may need to be reworked where there are small populations, such as island communities.
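The equal-allocation stratification described above can be sketched in a few lines. Everything here is illustrative: the sampling frame, the area and age-band values, and the per-stratum target are invented for demonstration; a real frame would come from an approved data source. Note how strata smaller than the target are taken in full, reflecting the need to rework quotas for small populations.

```python
import random
from collections import defaultdict

def stratified_sample(records, strata_keys, per_stratum, seed=0):
    """Draw an equal number of records from each stratum.

    records: list of dicts; strata_keys: fields defining a stratum
    (e.g. area, age band, gender); per_stratum: target draws per stratum.
    Strata with fewer members than the target are included in full.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[tuple(rec[k] for k in strata_keys)].append(rec)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical frame: 2 areas x 2 age bands, 5 people per stratum
frame = [{"id": i, "area": a, "age_band": b}
         for i, (a, b) in enumerate(
             (a, b) for a in ("Lothian", "Orkney")
             for b in ("16-34", "35+") for _ in range(5))]
picked = stratified_sample(frame, ("area", "age_band"), per_stratum=2)
print(len(picked))  # → 8 (4 strata x 2 draws)
```

Proportionate allocation (sampling each stratum in proportion to its population) is a common alternative when equal quotas would over-represent small areas.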

One interviewee also commented on mitigations for sampling errors, namely having multiple quality checks: checks in the survey setting at the point of completion, checks by service providers, and checks prior to analysis to confirm that the data is representative.

Examples from the evidence review: Sample sizes and response rates varied widely across surveys. For example:

  • New Hampshire State Government Community Mental Health Centre Client Satisfaction Survey – sent to 1,648 adults with a 44.5% response rate (663).
  • Department of Health Northern Ireland 2017 inpatient experience survey – sent to 18,575 with a 37% response rate (6,868).
  • 2023-24 Scottish Government HACE Survey – 526,758 sample, 107,538 responses (20%; 56% online, 43% post, <1% by phone).
  • Oregon Health Authority MHSIP – 10,000-11,000 sample, 2,295 responses (18%).
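Response rates like those above drive the arithmetic of sample design: to achieve a target number of completed responses, the invitation list must be grossed up by the expected response rate. The rate and target in this sketch are illustrative assumptions, not recommendations for a MHES survey.

```python
import math

def invitations_needed(target_responses, expected_rate):
    """Gross up a target achieved sample by an assumed response rate,
    rounding up so the target is met rather than undershot."""
    return math.ceil(target_responses / expected_rate)

# At an assumed 20% response rate (similar to the HACE and CQC surveys
# above), reaching 1,000 completed responses needs ~5,000 invitations.
print(invitations_needed(1000, 0.20))  # → 5000
```

The same calculation per stratum, rather than overall, shows why stratified designs for small areas can require invitation rates approaching a census of the local service-user population.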

Contact

Email: socialresearch@gov.scot
