3.0 Review of economic evaluation data
This section presents the output from the review of the evaluations containing an economic component. We begin with an overview of the data received before discussing the findings and recommendations. As much of the material used in the reports discussed here has already been covered under the workstream discussions, this section focuses more on the methodological issues arising from this analysis. Further information on the data review methodology is available in Appendix 1.
3.1 Overview of the data
Ten reports were reviewed that contained economic data (see Appendix 1). The approaches to economic evaluation varied across the reports. The two areas which have seen the greatest levels of funding and are furthest along with implementation – HMHM and telecare – have the strongest evidence base around economic return (e.g. the United4Health studies in HMHM, and the evaluations by York Economics of telecare use for dementia patients in Renfrewshire and the longitudinal study of the Telecare Development Programme 2006-2011). Only one study – the Living it Up evaluation in the digital platforms workstream – adopted a multi-stakeholder approach to economic evaluation; the remainder were concerned solely with costs and benefits accruing to public bodies.
All ten studies found a positive return on investment. In one longitudinal study, the estimated gross value of telecare-funded efficiencies over the period 2006-11 had a present value of approximately £78.6 million. The methodological variations across the studies make it difficult to compare returns, even where they examine the same technology. It is our assessment that the studies likely underestimate the value of technology-enabled care, as most placed an economic value on a narrow set of outcomes (see 3.2.2 below) and considered value creation solely to government and to health and care organisations (i.e. quality of life improvements were generally not valued). A second observation, discussed further below, is the difference in approaches required for health interventions and for social care interventions. The former, most notably HMHM, tend to have more experimental research designs underpinning them, recognising that, as clinical interventions, they usually adhere to the standards of evidence-based medicine. This is in many cases not practical, useful, or appropriate for social care interventions. Section 5 sets out a framework for robust, yet responsive and timely, research around technology-enabled care.
Box 1: Forecasted Economic Studies
Forecasted economic studies, such as cost-benefit analysis, predict the likely economic benefit of an investment. They differ from economic evaluations in that, rather than using actual data on outcomes, assumptions are used to estimate what is likely to take place in a given scenario or scenarios. Forecasted studies can be useful before the start of a project or intervention, to decide whether an investment is likely to represent value for money, and are often central to constructing a business case for an investment. They have particular value in relation to interventions, such as technology-enabled care, where the main benefits are only observed at scale and so may not be captured in pilot project evaluations. As forecasts rather than evaluations, they present different methodological issues and have not been included in the review of economic evaluations in this section.
The TEC programme has commissioned several high-quality forecasted economic analyses. One of these is a feasibility study by Deloitte for the provision of universal telecare services in Scotland. This modelled the costs and benefits of telecare provision for a range of uptake levels and care packages. The study estimates that, at the current level of provision for those aged 75+ (20% receiving telecare), the Scottish public sector yields £99m in benefits (though the majority are not cash-releasing) in return for a £39m spend by local authorities. It further estimates that increasing uptake to 34% among those aged 75+ would cost an extra £33m to £38m and generate additional (non-cash-releasing) annual benefits of between £85m and £102m. PA Consulting undertook a similar forecasted analysis to estimate the value of mainstreaming telecare provision in Glasgow. They estimate that transformation and mainstreaming of the service, which currently costs Glasgow Council £2m per year, could yield net in-year benefits of £3.2m. Such information can be helpful in making the case for further investment in mainstreaming provision.
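The headline figures in such forecasted analyses are often summarised as a benefit-cost ratio. A minimal sketch, using the figures quoted above from the Deloitte study (the helper function is ours, for illustration only, and the ratio here is a simple undiscounted one):

```python
def benefit_cost_ratio(benefits_m, costs_m):
    """Simple (undiscounted) benefit-cost ratio, with both inputs in £m.

    Note: a ratio above 1 does not imply cashable savings -- as the Deloitte
    study notes, the majority of the forecast benefits are not cash-releasing.
    """
    return benefits_m / costs_m

# Current provision: £99m in benefits against £39m of local authority spend
bcr_current = benefit_cost_ratio(99, 39)  # roughly 2.5

# Incremental case for raising 75+ uptake to 34%, taking the midpoints
# of the quoted ranges (£85m-£102m benefits, £33m-£38m costs)
bcr_incremental = benefit_cost_ratio((85 + 102) / 2, (33 + 38) / 2)
```

Expressing results this way makes forecasts with different cost bases easier to compare at a glance, though it hides the distinction between cash-releasing and non-cash-releasing benefits discussed in 3.2.3.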
3.2 Economic evaluation data review findings
3.2.1 Intervention costs
Intervention costs are quantified differently across the ten studies. Several studies included only the cost of the equipment and its installation, while others included administrative costs and other overheads. Some studies included development costs where a project was new, while others used only the ongoing running costs. The most robust studies examined how the technological innovation under study impacts on care service use more generally and included this in the intervention cost (although this could also be captured in the quantification of outcomes).
3.2.2 Outcomes valued
Most studies placed an economic value on a narrow set of outcomes, and usually considered only the value accruing to public bodies from these (see 3.2.3 for further discussion). In some cases, only a single outcome was valued. For instance, the Western Isles video conferencing stroke clinic evaluation attached an economic value only to the number of bed days saved. Several others considered two or three outcomes. The Living it Up SROI evaluation was the only study to value a comprehensive range of outcomes to multiple stakeholders. Whilst it is understandable that studies sought to demonstrate a business case, adopting a narrow frame is likely to miss important sources of value.
Very few studies consider the value of both positive and negative consequences. For instance, some technology-enabled care applications may in fact increase the need for certain forms of care, particularly in the short term. This was considered in a minority of studies (e.g. the York Economics study considered the additional care costs that come with enabling dementia patients to remain living in the community for longer).
3.2.3 Stakeholder perspective on value
One study – the Living it Up evaluation in the digital platforms workstream – calculates the value to multiple stakeholders. The remainder attach economic value only to outcomes that relate to the public sector, and usually only to changes in the use of health and care services (e.g. GP visits, avoided hospital or care home admissions). With several of the studies pointing to quality of life and well-being improvements for recipients of the intervention – and this being one of the main aims of the TEC programme – adopting a multi-stakeholder approach to economic valuation would capture value more holistically and improve the quality of decision-making.
Valuing benefits to stakeholders other than the public sector can be daunting. However, there are increasingly robust ways of undertaking such valuations. The key for TEC, if it is to endorse multi-stakeholder valuation, will be providing clear guidance to make this easier for evaluators and to ensure consistent approaches are adopted across studies wherever possible.
In assessing the value of the interventions to the public sector, most studies focus on changes in the use of health and care services and use unit costs to value these (e.g. the cost of a day in hospital is used to value the number of bed days saved). Using unit costs in this way is useful if we are interested in how much resource can be reallocated. If there is an interest in the extent to which cashable savings are generated, TEC may want to investigate marginal costs instead. Note that over time, as a greater number of services are displaced by TEC, the value of changes at the margin will increase, bringing it closer to unit costs and to cash-releasing savings.
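The distinction between the two valuation bases can be made concrete with a short sketch. All figures below are illustrative assumptions of ours, not drawn from any of the reviewed studies:

```python
# Sketch: valuing reduced hospital bed days under unit-cost vs marginal-cost
# assumptions. All figures are hypothetical, for illustration only.
bed_days_saved = 400
unit_cost = 450.0      # assumed average full cost of an occupied bed day (£)
marginal_cost = 120.0  # assumed cash-releasing cost of one fewer bed day,
                       # since staffing and estate costs are largely fixed (£)

# Unit costs answer: "how much resource could be reallocated?"
resource_reallocation_value = bed_days_saved * unit_cost

# Marginal costs answer: "how much cash is actually released?"
cashable_savings = bed_days_saved * marginal_cost
```

The gap between the two figures is exactly the gap that closes as displacement happens at scale: once whole wards or services can be decommissioned, the marginal cost of a bed day approaches its unit cost.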
3.2.4 Study design and additionality
The validity and robustness of the economic evaluations depend in large part on the design of the evaluation study, and the findings of the more general data review are relevant here. It is worth drawing attention to how the studies established the added value of the intervention (i.e. the change above and beyond what would have happened anyway). As noted earlier, experimental study designs are often not practical, necessary or appropriate when evaluating social care interventions. However, other methods of obtaining a benchmark or counterfactual may be required if the studies are to make robust claims about the value that has been created; this might involve controlling for national or regional trends or obtaining subjective assessments of attribution. In the ten reports reviewed for this section, several used a matched control sample (e.g. the United4Health studies on diabetes, COPD and CHF). Others used data from prior time periods as a comparison (e.g. the Falkirk Falls study), which can be a useful way of obtaining a counterfactual in the absence of a control group. There are risks with studies that rely solely on staff reports of key outcomes, such as avoided hospital admissions or reductions in length of stay (e.g. the video conferencing trial for psychiatry services in care homes). While staff reports are useful, they should wherever possible be triangulated with other data (e.g. historical comparisons; comparisons with other areas where a similar intervention has not been implemented). A further challenge is to separate the additional benefits of the programme itself from those of the individual workstreams and technologies; future evaluations would usefully address this.
3.2.5 Discounting and sensitivity analysis
The use of a Social Discount Rate (SDR) is common practice in social cost-benefit analysis and is recommended in Treasury guidance (HM Treasury, 2003). Applying an SDR reduces future benefits by a certain percentage each year to express them in their present value. Discounting recognises that people generally prefer benefits today to waiting for them in the future, and it compensates the current generation of taxpayers for this 'patience'. In addition, it seeks to address intergenerational inequality: it assumes prosperity will rise, and that future generations will be better off and better able to make investments than we are today. SDRs can play a significant role in influencing investment decisions. Most of the studies reviewed here looked at interventions spanning several years, yet a majority did not apply a discount rate.
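To illustrate the mechanics, the sketch below discounts a hypothetical stream of annual benefits at the 3.5% rate recommended in the Treasury's Green Book; the benefit figures are ours, for illustration only:

```python
def present_value(annual_benefits, rate=0.035):
    """Discount a stream of annual benefits (year 0 first) to present value.

    rate=0.035 reflects the social discount rate recommended in HM Treasury
    Green Book guidance for appraisal periods of up to 30 years.
    """
    return sum(b / (1 + rate) ** t for t, b in enumerate(annual_benefits))

# Illustrative: £20m of benefits each year for five years is worth roughly
# £93.5m in present-value terms, not £100m. Omitting the discount, as a
# majority of the reviewed studies did, therefore overstates the return.
pv = present_value([20.0] * 5)
```

The longer the benefit stream, the larger the overstatement from undiscounted totals, which matters for multi-year telecare programmes such as the 2006-11 study cited earlier.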
Sensitivity analysis is a process whereby key assumptions are varied to test how this affects the economic return. This is especially important for studies whose conclusions rest on weak or uncertain assumptions. Six of the ten studies reviewed here undertook some sensitivity analysis.
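A minimal one-way sensitivity analysis can be sketched as follows; the intervention, unit cost and programme cost below are hypothetical figures of ours, not taken from any reviewed study:

```python
def net_benefit(avoided_admissions, unit_cost, programme_cost):
    """Net economic benefit: value of avoided admissions minus programme cost (£)."""
    return avoided_admissions * unit_cost - programme_cost

# Vary the weakest assumption -- the number of avoided admissions -- by
# plus or minus 25% and observe how the headline return changes.
for factor in (0.75, 1.0, 1.25):
    nb = net_benefit(1000 * factor, unit_cost=2500.0, programme_cost=1_800_000)
    print(f"admissions x{factor}: net benefit = £{nb:,.0f}")
```

In this sketch the central case shows a £700,000 net benefit, but the pessimistic case falls to £75,000: a result that would look much less robust in a business case, which is precisely why reporting the sensitivity range matters.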
Recommendations for ways to improve economic reporting are provided in Section 5.