Self-directed Support Implementation Study 2018: report 2

Presents the results of: an international literature review; an assessment of current data and other evidence in Scotland on self-directed support; material from case studies.


2. Literature review 

The purpose of the literature review was not to identify the conclusions of previous evaluations, but to identify methodological approaches and limitations from evaluations of similar policies, strategies and programmes to inform the options appraisal.

The key questions for the review were:

  • What quantitative indicators have been/are being collected?
  • What qualitative questions have been asked?
  • What quantitative and qualitative methods have been employed?
  • What challenges and limitations have occurred?
  • Have any key areas emerged that should be considered for a future evaluation of self-directed support?

Methods

Whilst the approach was not a formal systematic literature review, many of the principles of systematic reviewing were followed – notably for the searches. The methodology for the literature review is described in Appendix 1. Appendix 2 contains a list of the references extracted and Appendix 3 provides further details of the studies included in the review.

National evaluations of established programmes

Of the ten national evaluations of established programmes, nine were conducted in England (1-8, 10). All examined individual budgets or direct payments, covering their impact either on all supported people or on the following groups:

  • LGBTQI+ disabled people
  • People with dementia
  • Older people
  • Carers
  • Front line practitioners
  • Managers
  • Councils.

An Australian study (9) evaluated Individual Funding – a similar scheme to self-directed support or portable funding packages – and its impact on supported people, services and government agencies.

The methodologies for all evaluations used a mixture of surveys, interviews and routinely collected monitoring data. Data extracted from each study is shown in Tables A3.1 and A3.2 in Appendix 3.

Local evaluations of established national programmes

Two studies were extracted providing methodological evidence from local evaluations of established national programmes (11-12).

One study from Essex (11) used a longitudinal methodology to look at the impact of individual funding over a three-year period on the same group of carers, supported people, families and providers. A study from Illinois (12) evaluated the Home-Based Support Services Programme for Adults (HBSSP) using a one-off survey. Data extracted from each study is shown in Tables A3.3 and A3.4 in Appendix 3.

National evaluations of pilot programmes

Of the six evaluation reports/published studies extracted, five were in England covering pilots of individual budgets (13, 15-18) and their impact on older people and other adults, carers, disabled children and their families, and people in long term residential care, as well as services engaged with these people. One study from Germany (14) looked at the impact of personal budgets on people with long-term care insurance. Data extracted from each study is shown in Appendix 3 (Tables A3.5 and A3.6).

Local evaluations of pilot programmes

One study evaluated a pilot project of support, advocacy and brokerage in three local authorities in England (19). Data extracted from the study is shown in Tables A3.7 and A3.8 in Appendix 3.

Summary of issues from review for evaluability assessment

Economic evaluations

There was a paucity of studies that undertook full or partial economic evaluations of self-directed support-type policies or interventions: only six attempted to address any aspect of economic impact (2, 14-17, 19). Of these six, only one was a national evaluation of an existing programme (2), and four dealt solely with estimating the costs of delivering individual budgets or direct payments (2, 14, 15, 19). The national evaluation of the existing programme (2) provided only a point estimate of costs and did not consider change, since the programme was already in place and no data could be gathered on the costs of support beforehand. Collecting reliable, high-quality cost information – whether from families or practitioners – was frequently raised as a challenge across evaluations.

One study calculated the cost per satisfied family (17) (as judged by a survey), but the authors acknowledged this could not be compared with any other policy or intervention, or even with a ‘before’ scenario. As no a priori assumption had been made about what would constitute an acceptable cost per satisfied family, the analysis was of little use to evaluators or decision makers.
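The cost-per-satisfied-family measure is simple arithmetic; a minimal sketch (using invented figures, since the data from study (17) are not reproduced here) illustrates why the number is uninterpretable without a comparator or a pre-agreed threshold:

```python
# Illustrative only: the figures below are invented, not taken from study (17).
total_programme_cost = 1_200_000.0   # hypothetical annual delivery cost (GBP)
families_surveyed = 400
families_satisfied = 300             # hypothetical survey result

cost_per_satisfied_family = total_programme_cost / families_satisfied
print(f"Cost per satisfied family: £{cost_per_satisfied_family:,.0f}")

# Without a comparator programme, a 'before' scenario, or an a priori
# acceptable threshold, there is no benchmark against which this figure
# can be judged good or poor value.
```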

A further study calculated cost-effectiveness ratios based upon responses to the General Health Questionnaire (GHQ) and the Adult Social Care Outcomes Toolkit (ASCOT) (19). This was undertaken as part of the Individual Budgets Pilot Programme, which had a comparator cohort not receiving individual budgets. As such, the methodology is not replicable for self-directed support in Scotland, where the policy is already established nationwide.
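Study (19)'s exact calculation is not reproduced here, but the standard incremental cost-effectiveness ratio it relies on can be sketched to show why a comparator cohort is essential – both the cost difference and the outcome difference are measured against the control group:

```python
# Illustrative sketch of a standard incremental cost-effectiveness ratio (ICER).
# All figures are invented; study (19)'s actual data and method are not shown.

def icer(cost_intervention, cost_control, outcome_intervention, outcome_control):
    """Incremental cost per unit of outcome gained (e.g. per ASCOT point)."""
    delta_cost = cost_intervention - cost_control
    delta_outcome = outcome_intervention - outcome_control
    if delta_outcome == 0:
        raise ValueError("No outcome difference: the ICER is undefined")
    return delta_cost / delta_outcome

# Hypothetical mean annual cost and mean ASCOT score for each cohort:
ratio = icer(cost_intervention=15_000, cost_control=13_500,
             outcome_intervention=0.78, outcome_control=0.72)
print(f"Incremental cost per ASCOT point gained: £{ratio:,.0f}")
```

Without a cohort not receiving the intervention, neither difference in the ratio can be estimated – which is why this approach does not transfer to a programme already rolled out nationwide.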

The evidence from the literature review shows the difficulty of evaluating self-directed support economically, at least in terms of whether it is a better way of allocating resources in social care than other methods. What the literature highlights is that identifying the costs associated with delivering care with choice, control and individualised outcomes has either not been considered in previous evaluations or has proven so challenging that the evidence is neither robust nor meaningful. We were not able to identify any economic evaluation of existing national programmes or policies, which is understandable given the difficulty of isolating the change in resource use once a programme has been established.

What is a ‘good conversation’? 

A key component of self-directed support is the good conversation that takes place between social workers and supported people to ascertain the right option for delivering their social care. Whilst such conversations appear in qualitative descriptions of the assessment process in the pilot evaluations identified in the literature review, no evaluation attempted to establish quantitatively whether options had been fully explained by social workers, and no national evaluation of an established programme examined this qualitatively. In any case, it is unclear whether simply explaining the options would constitute a ‘good conversation’ – one that enables rather than directs choice. The literature review therefore did not provide evidence on how a good conversation should be evaluated or monitored, or how the quality of assessment can be captured.

Quantitative vs qualitative evaluation and the role of a control group

Of the ten national evaluations of established policies or programmes identified, only one – the evaluation of Personal Health Budgets (9) – involved an ongoing form of national evaluation or monitoring (a bi-annual survey of supported people). The remaining evaluations were ad hoc studies of specific groups of supported people or stakeholders, or focused on specific issues.

Where national data on supported people receiving personalised social care (an approach similar to the Scottish one) is captured routinely – as in Australia and England – the data has only appeared in audits or ad hoc evaluations, and in both cases has been criticised for a lack of standardisation at local authority or state level in how personal budgets are recorded.

The methodological focus of nearly all national evaluations of established policies or programmes was qualitative. Outside of the POET data collection described below, where quantitative data was collected it was gathered either through a targeted survey of the group of interest (such as Directors of Social Services) or from nationally available sources to address very specific questions (such as the percentage of people with personal budgets). No national evaluation considered or attempted to construct any form of control group. The lack of a quantitative counterfactual necessarily limits the conclusions that can be drawn from quantitative data about impacts. In any case, with established national-level policies and programmes it is not clear how a control group could have been constructed. This may explain why evaluations of existing national-level policies and programmes have focused on qualitative research on specific issues or for specific groups.

Evaluation of high-level outcomes

Evaluations of both existing and pilot programmes have attempted to capture high-level outcomes on wellbeing and satisfaction. In only one case – again the evaluation of Personal Health Budgets (9) – is this an ongoing evaluation, using the bi-annual Personal Outcomes Evaluation Tool (POET) produced by In-Control.

In pilot evaluations, several other existing tools for measuring outcomes in social care and health have been tried – such as the Adult Social Care Outcomes Toolkit (ASCOT), the GHQ-12 or the EQ-5D – as well as bespoke survey questions, usually simple self-reported responses on a Likert scale.

The tool chosen to capture outcomes may matter less than the approach to administering it. Whilst census-type surveys of local authorities have had some success, surveys of supported people and services have often produced disappointingly low return rates. There is a danger in assuming that a blanket approach to surveying will produce a meaningful sample: even if the absolute numbers returned are large, the likelihood of self-selection bias among those completing the surveys is high. Several approaches to survey distribution have been trialled in the published evaluations, including telephone and online surveying. This experience suggests that purposeful sampling – whilst more expensive per completed form – is likely to result not only in higher response rates but also in a more representative sample than a blanket survey.
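The point about self-selection can be made concrete with a small simulation (all parameters invented for illustration): when the propensity to respond depends on the outcome being measured, even a sample with thousands of returns gives a biased estimate.

```python
# Illustrative simulation with invented parameters: a large self-selected
# sample still gives a biased estimate when response propensity depends on
# the outcome being measured.
import random

random.seed(42)
population = 100_000
true_satisfied_rate = 0.60  # assumed true satisfaction in the population

responses = []
for _ in range(population):
    satisfied = random.random() < true_satisfied_rate
    # Assumption for illustration: dissatisfied people respond more often.
    respond_prob = 0.10 if satisfied else 0.30
    if random.random() < respond_prob:
        responses.append(satisfied)

sample_rate = sum(responses) / len(responses)
print(f"Returned surveys: {len(responses):,}")       # a large absolute number
print(f"Estimated satisfaction: {sample_rate:.0%}")  # well below the true 60%
```

Despite tens of thousands of returns, the estimate sits far below the true rate – a larger blanket mail-out would not correct this, whereas a smaller purposeful sample drawn to match the population could.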

Supported people with different characteristics

A key theme arising from the evaluations was the different challenges faced by supported people, and how these vary not only with underlying need (such as dementia) but also with characteristics such as sexuality. Similarly, the impact on carers can depend on the relationship the carer has with the supported person. Any ongoing monitoring therefore needs to capture the important characteristics of both supported people and carers to ensure that self-directed support is working for everyone, and a focused evaluation may be required to understand how the characteristics of people and carers affect the choice and control they are able to exercise.

Need for a longitudinal approach

Several of the evaluations identified highlighted the need to be able to monitor change over time – not just at a population level but for individual supported people, their carers and families and service providers. This would enable evidence to be gathered of how the choice and control offered by initiatives such as self-directed support results in changes in long-term outcomes and how the ability to exercise choice and control adjusts over time as circumstances change.

Contact

Email: socialresearch@gov.scot
