TEC programme data review and evaluation: options study

This report presents the findings from the Technology Enabled Care (TEC) programme data review and evaluation options study.


5.0 Measurement Framework

The difficulties associated with assessing TEC against the principles and requirements of traditional evaluation methods were raised by workstream leads at the outset of this study. This was especially an issue for interventions that sit closer to the health end of the 'health and social care' spectrum, where there is an expectation that studies will be experimental in design. This has led to high-quality non-experimental studies being rejected by clinical commissioners. In general, participants identified a need for a better understanding of the difficulties of using experimental methods in TEC, and for greater acceptance of the value, and complementarity, of other approaches.

To this end, we propose a new approach to measurement for the TEC programme as it moves forward into the next phase of work. This builds on a conceptual framework developed by Glasgow et al. (2013) to overcome the challenges of using traditional research approaches in the field of eHealth, which closely relates to the TEC programme. This field of research is still developing, so adopting the approach would both make a direct contribution to technology-enabled care in Scotland and present an opportunity to make wider methodological and practical contributions.

Technology-enabled care interventions seek to transform the way in which health and social care are delivered, and our starting point is that this differentiates them from some other areas of health and social care that are more amenable to traditional evaluation. Technology-enabled care interventions require timely data, flexible and adaptable methodologies and attention to context. The Rapid Relevant Research Process (RRRP) is advocated by Glasgow et al. (2013) to respond to these requirements. It aims to be robust, pragmatic, relevant and more likely to produce results that will translate successfully into policy and practice than the traditional approach. The model is also deliberately non-linear, and emphasises an iterative, adaptive research process.

A further issue that differentiates technology-enabled care from other areas of health and social care is that it faces a set of unique challenges relating to implementation. These have been discussed in detail in the previous section. However, they are also relevant to evaluation. Whilst we know that there is considerable evidence for the efficacy of many of the interventions, there remains a need to fully understand the conditions under which they are effective, generalisable, sustainable, and cost-effective. Just Economics have proposed that Implementation Science (IS) frameworks could support the development of implementation strategies. Each IS framework includes as a central feature a specific approach to evaluation. The measurement requirements of these frameworks may differ, but they share the need for early, ongoing feedback to support continuous refinement of the implementation approach. This is again consistent with the RRRP approach to evaluation.

5.1 The Rapid Relevant Research Process

Table 3 compares the RRRP with the traditional research pipeline. The issues that the RRRP seeks to address are very relevant for the TEC programme. Whilst this is a break with evidence-based medicine (EBM), it is entirely consistent with approaches to evaluation that have long been used in social science and related areas such as social care. The approach is therefore not new, but it does provide a broad framework to direct and guide decisions on commissioning and conducting research. Furthermore, the approach has been developed in the health sector, and its acceptability to a health audience is important if commissioners are to move beyond EBM approaches. Adopting a broad framework will help ensure that a more consistent and strategic approach to evaluation is used and that best-practice elements, such as stakeholder engagement, are included.

Table 3: Comparison of traditional research pipeline with rapid, relevant research process

Issue: Speed
Traditional pipeline: Slow to very slow
Rapid, relevant research process: Rapid, especially early on

Issue: Intervention and 'protocol' flexibility
Traditional pipeline: 'Frozen' early and standardised
Rapid, relevant research process: Iterative, evolves, adaptive

Issue: Adaptation
Traditional pipeline: Seen as bad – compromise to integrity
Rapid, relevant research process: Encouraged and necessary to 'fit'

Issue: Designs used
Traditional pipeline: Predominantly RCT
Rapid, relevant research process: Several, interactive, convergent

Issue: Costs and feasibility of research and products produced
Traditional pipeline: Not considered, usually high
Rapid, relevant research process: Central throughout, considered before, during and after; 'minimum intervention needed for change' approach

Issue: Stakeholder engagement
Traditional pipeline: Little, and usually only responding to research ideas
Rapid, relevant research process: Throughout, essential

Issue: Reporting
Traditional pipeline: CONSORT criteria, primary outcomes, little else
Rapid, relevant research process: Broad, transparent, perspective of adoptees

Issue: Role of context
Traditional pipeline: De-emphasised, assumed independent of context
Rapid, relevant research process: Context is central, critical and studied

Source: Glasgow et al. (2013)

We have expanded and developed the challenges presented in Table 3 into a set of principles to underpin the future approach to measurement in the TEC programme (see 5.3). These principles demonstrate how we envisage taking these concepts forward into practice. As much as possible, the principles draw on current approaches to measurement within the TEC programme, but formalise these so that they can be applied systematically across the programme.

In advocating for the RRRP approach, we are not suggesting that experimental studies, such as Randomised Controlled Trials (RCTs), should never be used to examine the efficacy of technology-enabled care. Instead, we are saying that such methods are unlikely to be the most appropriate for most evaluations undertaken by the TEC programme. This is especially the case as the programme moves to a greater emphasis on implementation, as set out in its strategic priorities for the next three years.

5.2 Measurement requirements emerging from the strategic priorities 2018-2021

The TEC programme has already achieved significant progress in many areas. As the programme moves into its next phase, there is still much to achieve, including ensuring that change takes place at scale, and becomes self-sustaining. The four key strategic priorities for the TEC programme from 2018-2021 are set out in Table 4, with the emerging measurement requirements highlighted for each, including how they map on to the RRRP approach.

Table 4: Measurement requirements for each strategic priority

Strategic Priority: Preparing for the future – identifying and testing new approaches that offer the potential to achieve change at scale

Measurement requirement:
  • Rapid, robust and pragmatic evaluation and 'tests of change' that identify new approaches that could be used at scale. These should 'demonstrate measurable improvement in outcomes either directly to individuals or indirectly through improved service delivery processes'.
  • Flexible and adaptive methodology, which ensures that learning can continuously inform the ongoing design of the approach.
  • The methodology should also produce results which highlight key issues that will inform roll-out at scale.

Strategic Priority: Developing approaches once for Scotland – developing approaches that have been shown to be effective, supporting scaling up across Scotland and addressing barriers that require national-level action

Measurement requirement: Research methodology that:
  • focuses on understanding the impact of context and setting
  • highlights issues/results which are key for future adopters, such as barriers and enablers to successful implementation
  • supports and enables collaboration at all levels

Strategic Priority: Building capabilities and supporting improvement – championing, supporting, gathering and promoting the evidence of what works, to develop the culture and skills that recognise and use digital TEC, including through developing business cases, supporting strategic planning and delivery

Measurement requirement:
  • Methodology that produces results which are easy to share, readily translatable into policy and practice, and include a rich mix of qualitative and quantitative data.

Strategic Priority: Transforming local systems – supporting exemplars that are seeking to transform local health and social care systems using digital technology to shift local systems upstream to prevention, self-management and greater independent living

Measurement requirement:
  • Rapid, robust, pragmatic small-scale evaluations of local exemplars demonstrating achievement of short and medium-term outcomes as set out in the logic models.
  • A flexible and adaptive research design, so that learning can continuously inform the project's ongoing design and performance (including capturing learning from the early stages).

5.3 Measurement principles

The following nine principles have been developed from the RRRP framework and the evaluation requirements emerging from the strategic priorities for the next three years. Many of these principles are already being applied. We collect them here to ensure that they are adopted consistently in future measurement and evaluation.

Principle 1: Be strategic

Evaluations should add value and be cost-effective

Evaluation resources should be carefully deployed to ensure that they are addressing gaps. Where evidence already exists, even if it has been collected outside the TEC programme, evaluation should not seek to repeat this. There will be exceptions, for example where research took place in a different context, which may affect the results, but there should be a clear and specific rationale for doing so (e.g. if it is intended to have policy influence). Although Scotland is at the forefront of technology-enabled care development and evaluation, it is a rapidly expanding area of research. The programme should keep abreast of the latest international research. Regular research digests could both inform staff and provide them with support in their day job (e.g. where evidence demonstrates the efficacy of their approach).

Strategic decisions should also be guided by cost-effectiveness. Methods which are costly or time-intensive may not be the most appropriate, if information can be gathered by other means. To this end, the programme may want to consider bringing more evaluation skills in-house. A staff member with partial oversight of evaluation could support delivery staff in the field with their evaluation requirements as well as carry out some research tasks, such as reviews of the international literature and managing the collection of monitoring data. This could be combined with commissioning of external research in areas of specialist skill, such as systematic reviews, Implementation Science or economic analysis.

Principle 2: Plan and Scope

Each evaluation should be carefully planned and scoped

Once an area of evaluation is chosen, careful planning is needed to ensure that a good research design and an appropriate scope are selected. Well-planned evaluations can still encounter problems and require refinements to design, scope or research questions, but good planning minimises this risk. We have developed the following checklist for use in the planning phase to ensure that an appropriate design and scope are chosen (some items are developed further in the principles that follow):

1. Why is the evaluation being carried out? How will it support the achievement of the Strategic Priorities?
2. What are the specific research questions that need to be addressed?
3. What research method will be most appropriate to answer those research questions? (Principle 4)
4. How quickly do the results need to be available? How does this inform the research design? (Principle 5)
5. Will the findings provide new evidence or information, or is it repeating what has already been done? Can new variables be included? (Principle 1)
6. Is the research method sufficiently iterative, adaptive and flexible? If not, is there a sound argument for choosing it? (Principle 6)
7. What are the cost implications? How can the evaluation be run most cost-effectively? (Principle 1)
8. Does context matter for the evaluation and, if so, how should this be incorporated into the design? (Principle 7)
9. Who are the key stakeholders and how will they be involved throughout? (Principle 8)
10. For an outcomes evaluation, is it appropriate to incorporate questions regarding implementation, sustainability and economic impacts? (Principle 1)
11. How can data collection and analysis be streamlined and automated, whilst minimising the potential for error? (Principle 9)

Principle 3: Measure what matters

There are three aspects to this principle: (a) measure outcomes; (b) measure things relevant to people; and (c) ensure that indirect effects/externalities are captured.

Although the language of outcomes is now commonplace in the measurement of public and charitable services, the term is often used loosely to describe things that are outputs, indicators or process measures. Decisions about what constitutes an output, outcome, or indicator may vary in different situations. For example, something that is considered an output of one of the workstreams may be an outcome of the programme and vice-versa. In any logic model, the set of outcomes should work together to provide an overall picture of the impact the intervention is having. As a set they should be comprehensive, clear, incommensurable (be measuring different things) and irreducible (be referring to only one concept).

To ensure that you are measuring things that are relevant to your stakeholders, it is important to coproduce research with them. Coproduction in research is not a new idea and forms the basis of approaches such as Action Research. The idea is that stakeholders have greater control over the research process and are provided with opportunities to learn from their experience. It is more than simply a data collection exercise (see Principle 8).

A benefit of coproducing research with your stakeholders is that it ensures that all significant outcomes are identified, including indirect or unintended ones. Where possible, evaluations should start with an open question as to the outcomes, rather than focusing on a predetermined set of measures. This holistic approach ensures that unexpected outcomes are considered in the research design.

Principle 4: Methodological Plurality

The most appropriate methodology/approach should be chosen from a range of options

There are many different evaluation methods and approaches. No single approach or method is appropriate to all situations, nor is any intrinsically better than another: they all have strengths and weaknesses and work more or less well in different contexts. For one-off evaluations – in contrast with systematic monitoring – employing a range of approaches across the programme gives richness to the data, and should provide flexibility, which as discussed is an important requirement in the evaluation of technology enabled care.

One key decision is whether to use qualitative, quantitative or mixed methods approaches. Ideally, the chosen approach should be the best fit for the research questions being asked and the evaluation's overall aims. However, this will need to be balanced with issues of cost, data availability and feasibility (see Principle 1). The need for pragmatism will sometimes mean that a 'second best' approach is chosen. Table 5 below sets out where qualitative or quantitative approaches may be most useful:

Table 5: Comparison of qualitative and quantitative methods

Qualitative methods help to:

  • Develop or refine your research questions
  • Understand feelings, perceptions and attitudes
  • Capture the language used to describe services or products
  • Generate ideas for improvements
  • Understand context in depth
  • Interpret findings from quantitative data
  • Develop hypotheses which can then be tested using quantitative data

Quantitative methods help to:

  • Test hypotheses or theories developed through qualitative research
  • Understand effects at scale of a service or product
  • Generalise results to a wider population
  • Find patterns or trends emerging from a large amount of data
  • Generate statistically significant results

It is often useful to begin with qualitative research, or stakeholder engagement, to refine the research questions, understand the context and generate specific hypotheses (such as a logic model or an outcomes framework) that are then tested through quantitative research. A final stage of qualitative research is sometimes used to help interpret and understand the findings from the quantitative research and support its dissemination. This approach is widely used and underpins designs like theory-based evaluation (Weiss, 2000), realistic evaluation design (Pawson and Tilley 2006), contribution analysis (Mayne, 2001) and Social Return on Investment (Nicholls et al. 2010). Similar to the RRRP approach, these emphasise context and its impact on the causal chain. However, such mixed method studies can be time consuming and sometimes costly, and any given study may focus purely on one or other approach, depending on the evaluation requirements, complexity and budget.

Principle 5: Timeliness

Evaluation findings should be available in a timely fashion

When planning an evaluation, the likely time lag for any methodology should be considered alongside the time scale within which the results are needed. As discussed, there is typically a considerable time lag involved when using experimental research designs such as RCTs. In the context of technology-enabled care, where the technologies under study are rapidly evolving, this delay may mean the findings are all but obsolete by the time of publication.

If data are required quickly, short cross-sectional, online surveys can be used, or interviews conducted by telephone, video conferencing or SMS. The development of survey templates, interview guides or other 'off the shelf' approaches may be helpful for managers who need data in a timely fashion and reduce duplication of effort. It may also be appropriate to construct a panel of TEC users that can be returned to regularly at short notice to test out ideas. This process could be furthered by the development of better, more systematic outcomes monitoring linked to the logic models for each workstream. Survey templates could be developed that address indicators at each link in the causal chain. These could then be utilised depending on the stage of the project life cycle. The use of templates generally will reduce the amount of time that delivery staff must spend on evaluation.

Principle 6: Flexibility

There should be a focus on evaluation methods that are iterative, adaptive and flexible

As mentioned above, the rapid evolution of technology in the field of TEC has implications for evaluation. As well as providing rapid results, the evaluation method needs to support the process of continuous learning by providing feedback loops. For example, if there are early indications that an approach to implementing a new service is not working, the evaluation should (a) provide the data to demonstrate this in a timely fashion and (b) adapt to take account of changes that are required to overcome this. This may involve abandoning an evaluation at the point where something is found not to work or making a recommended change and then re-evaluating its impact. As such, the overall evaluation of technology-enabled care is likely to be iterative, rather than linear. The focus should be on ensuring that the information gathered through evaluation supports the achievement of the Strategic Priorities. The evaluation methods and approaches should be continually adapted as needed to fulfil this aim (i.e. those agreed at the planning stage should not be rigidly adhered to if it later transpires that they are no longer the most appropriate method/approach/indicator).

Principle 7: Context matters

Context should be central, focused on and reported

A key difference between the RRRP and the traditional approach is the importance of context. It shares this with the theory-based approaches set out in Principle 4. Evaluation plans should demonstrate how they will ensure that context is considered, and evaluation findings should, wherever appropriate, include an understanding of how the context may have influenced the results. For example, in an intervention aimed at scaling up video conferencing, the evaluation needs to account for the digital skills of the population and the quality of the infrastructure if it is to understand why the intervention was successful or otherwise. Unless context is considered at the planning stage, there is a risk that causal inferences will be mistaken.

Context is especially important for implementation and demonstrates the need to incorporate implementation into the testing of the logic model. When developing a logic model, context may be explicitly identified or expressed as part of the discussion of the assumptions that underpin it.

Principle 8: Involve stakeholders and clients/citizens

Stakeholders should be involved throughout

In evaluation, 'stakeholder' refers to any group or entity that affects an intervention or is affected by it. The involvement of stakeholders at the planning stage ensures that the evaluation measures things that are most important to those directly experiencing the change and multi-stakeholder approaches are now commonplace in many types of evaluation. Not all stakeholders are of equal importance and the relevance of any stakeholder group to the analysis needs to meet a materiality test if they are to be included. This materiality test asks whether sufficient benefit is likely to have been created for that group, relative to the whole, to merit its inclusion in the analysis. The aim is to focus the logic model on the most significant outcomes, where omission would influence decision-making.

Engaging stakeholders generally begins with a mapping exercise to ensure the analysis is sufficiently holistic and that indirect effects are captured. Engaging stakeholders ensures that:

  • Context is well-understood
  • The logic model is relevant to all material stakeholders
  • Measurement tools are well-designed and meaningful to participants
  • Data on barriers and enablers can be gathered quickly and cost-effectively
  • Short-term recommendations for improvements can be made and acted on quickly

Stakeholder engagement is essential for smooth implementation. It also establishes a two-way communication channel between those initiating change and those affected by it. A further benefit is that it requires the use of participatory methods which can be empowering for staff and service users who may want to know their issues are being heard.

It is sometimes necessary to segment stakeholders to ensure that the logic model is coherent. This happens when materially different outcomes are identified for sub-groups of stakeholders, which will require distinct measurement. For example, different users of telecare will have a broadly similar logic model, but their outcomes may be sufficiently different (e.g. care home clients with dementia compared with a young person with a physical disability living alone) to merit distinct measurement.

Principle 9: Use technology

Data collection and analysis should be automated where possible

Data collection and analysis can be very time consuming, as well as susceptible to human error and bias. Using technology to support measurement reduces the risks of human error (e.g. from data being inputted incorrectly) and can be less resource intensive. Moreover, TEC has a unique opportunity to incorporate data collection into the design of technologies so that data on outcomes, economic impacts and implementation is routinely gathered. In time, this could be facilitated and coordinated by the digital platform so that technologies can be programmed to produce real-time data for different aspects of the logic model. Although largely aspirational at this stage, it is important that the design and deployment of existing and future technologies makes full use of their data collection potential.

In the short term, there are several things that can be done to ensure technology supports evaluation. TEC Programme staff told us that they find collating monitoring data time consuming and that there are issues with how consistently people are recording data. There is potential here for the use of technological approaches, such as online systems for data gathering, a greater use of templates and guidance on how these are reported. Finally, much of the current monitoring data that is output-focused could be extended to include data on outcomes, economic impacts and implementation. However, this could only be achieved if the process is simpler and more streamlined. Practical ways that this could be achieved are addressed under the recommendations.
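Purely as an illustration of the kind of low-cost automation envisaged here, the sketch below checks a monitoring return against a shared template before it is collated, flagging missing columns and out-of-range values. The file name, field names and validation rules are hypothetical placeholders rather than the programme's actual monitoring fields.

```python
# Illustrative sketch only: the field names, file name and rules are
# hypothetical, not the TEC programme's actual monitoring template.
import csv

REQUIRED_FIELDS = {"workstream", "period", "users_enrolled", "outcome_score"}

def validate_return(path):
    """Check a monitoring return against the shared template and list any issues."""
    issues = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_FIELDS - set(reader.fieldnames or [])
        if missing:
            issues.append(f"Missing columns: {sorted(missing)}")
            return issues
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            if not row["users_enrolled"].isdigit():
                issues.append(f"Row {i}: users_enrolled should be a whole number")
            try:
                score = float(row["outcome_score"])
                if not 0 <= score <= 100:
                    issues.append(f"Row {i}: outcome_score outside expected 0-100 range")
            except ValueError:
                issues.append(f"Row {i}: outcome_score is not a number")
    return issues

if __name__ == "__main__":
    for problem in validate_return("monitoring_return.csv"):
        print(problem)
```

A simple check of this kind, run when each return is submitted, would catch inconsistent recording at source rather than at the point of collation.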

5.4 Recommendations for future measurement and evaluation

In this section, we provide practical recommendations of steps that could be taken to work towards the achievement of the nine principles set out above. Some are high-level recommendations on the approach to evaluation – the 'how' to measure – and others are recommendations on evaluation content – the 'what' to measure. These are cross-cutting recommendations drawn from all the research activities carried out as part of this project. There is no hierarchy as such, but some will require a larger scale or resource level (e.g. implementation) than others (e.g. quality of life impacts).

1. Adopt the RRRP approach to evaluation

We recommend that the Scottish Government formally adopts this approach and uses the nine principles set out above to guide evaluators. In many instances, this is what the workstreams are already doing. Formalising the approach provides academic and methodological credibility for work that is already being undertaken. In addition, whilst it is generally consistent with existing practice, more could be done to strengthen elements like stakeholder engagement. As part of this, we would recommend building economic analysis more routinely into evaluations. In a break with traditional cost-benefit analysis, this would mean that economic studies would not be limited to cases where experimental data are available, but could be undertaken alongside a wide range of study types and methodologies. Finally, it would be useful to document the process of adopting and developing the RRRP, as this will increase its legitimacy as well as contribute to a process of ongoing learning and development.

2. Prioritise implementation and the impact of the TEC programme

Much of the evaluation to date has focused on the technologies at the workstream level, rather than the programme as a whole. In future evaluations, and as the programme matures, it will be helpful to shift the emphasis onto the impact of the programme, even though data gaps remain at the workstream level. This need was also highlighted in Hudson's evaluation of the first year of the programme (Hudson, 2016).

Given that the programme focuses largely on implementation, this will be facilitated by a greater focus on evaluating this area. From the exploratory research in this study, we recommend exploring the following evaluation options.

a) Scoping research on implementation needs and the effectiveness of existing strategies.

Implementation has already been addressed as part of evaluations to date (e.g. Lennon et al. 2017, which identified success factors), including other work in Scotland and work undertaken prior to the TEC programme commencing. Indeed, the programme was initially based on a review of implementation in telehealth and telecare. Nonetheless, this has been identified as an area that requires ongoing investigation, with a view to identifying the most significant barriers (and new barriers as they emerge) and, crucially, the most useful implementation strategies to address them as the programme shifts to national scale-up.

b) Identification of an IS framework for future TEC rollouts.

Implementation Science is best applied proactively at the planning stage, rather than retrospectively. There are about 60 IS frameworks in existence, but none specifically for TEC. Some frameworks, like RE-AIM, have been used by other researchers in the eHealth (including telehealth) field (Glasgow et al., 2013). It is recommended that the programme use, or adapt, an existing IS framework in future project planning and evaluation that can support both national and local implementation and contribute to the implementation of the new Digital Health and Care Strategy.

Further research is required to determine which framework is best suited to TEC. Questions that would need to be addressed are whether a) one would be comprehensive enough for the whole programme at both national and local levels, or whether separate ones are required for specific workstreams and b) whether an existing framework would be sufficient or whether a bespoke approach is required.

c) Evaluating the success of the implementation framework.

Should an implementation framework be selected, it will be important to evaluate its effectiveness. This could involve routine examination of service audit information, quality monitoring, feedback from frontline staff and service users and regular mixed method reviews of progress. This would be highly beneficial for the programme and the future development of TEC and wider digital health and care priorities in Scotland. It would also be of wider academic, policy and practice interest, and contribute further to innovation in technology-enabled care.

3. Conduct a review of monitoring data capture and use evaluation to address gaps in the logic models

The way in which monitoring data is captured should be reviewed. Alongside this, a long-term plan for automating data collection should be developed as well as a short/medium-term plan for the interim. This would seek to address the inconsistent and time-consuming nature of existing data collection and explore how to better support this. We recommend increasing the use of technological solutions where possible. Assuming a better system can be developed, this should seek to extend what is being captured to include outcomes, economic, and implementation data.

Evidence matrices have been developed for each of the workstreams and the overall programme, which compare the existing evidence base with the outcomes identified in the logic models. This is a systematic way of identifying gaps in the evidence and could be used to guide decisions on future evaluations.

4. Adopt a more consistent approach to evaluation, especially economic evaluation

In general, the programme could benefit from more consistency in how evaluations are approached. This would enable comparison but also improve the quality of evaluations by ensuring that good indicators and approaches to data collection are shared across projects. However, consistency is especially important for economic evaluations. This could be improved by adopting a standardised approach to the measurement of outcomes, wherever possible. [7] This could include the development of a bank of indicators and values that enable measurement and valuation to be undertaken more easily for multiple stakeholders and in the same way across different studies. This will make it easier to aggregate and compare results, or at a minimum read across different studies.

The logic model work undertaken for the interim report provides a strong foundation for developing a common indicator bank. This could be based on guidance such as the GDS Digital Inclusion Evaluation Toolkit developed by Just Economics. Such an approach could be tailored for economic evaluation with more sophisticated modules where a higher standard of evidence is required. Consistency around values is also important. Several studies used the ISD costings (http://www.isdscotland.org/Health-topics/Finance/Costs/) and these could be incorporated into any guidance that is developed, along with the costings on social care for England by the PSSRU (http://www.pssru.ac.uk/project-pages/unit-costs/).
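As a sketch of what a common indicator bank might contain, the hypothetical entry below records, for a single indicator, the outcome it evidences, the stakeholder, the unit of measurement and the source of any unit value. The fields and example values are placeholders for illustration only; actual entries would draw on the logic models and agreed sources such as the ISD and PSSRU costings.

```python
# Illustrative sketch of a common indicator bank entry; the fields, values and
# sources shown are hypothetical placeholders, not agreed programme figures.
from dataclasses import dataclass

@dataclass
class IndicatorEntry:
    outcome: str           # outcome from the logic model the indicator evidences
    indicator: str         # how the outcome is measured
    stakeholder: str       # whose outcome it is
    unit: str              # unit of measurement
    unit_value_gbp: float  # financial proxy or unit cost used in economic analysis
    value_source: str      # where the value comes from (e.g. ISD or PSSRU costings)

example = IndicatorEntry(
    outcome="Avoided unplanned hospital admission",
    indicator="Emergency admissions per 1,000 users per year",
    stakeholder="NHS board",
    unit="admissions avoided",
    unit_value_gbp=0.0,  # placeholder: take the relevant figure from ISD costings
    value_source="ISD Scotland cost book (reference to be agreed)",
)
```

Holding entries in a shared, structured form of this kind would let measurement and valuation be applied in the same way across studies, making results easier to aggregate and compare.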

5. Develop guidance, in-house skills and a microsite

To improve consistency and ensure that high-quality approaches are used, the TEC programme would benefit from developing its own bespoke guidance. This would include templates for evaluations, surveys and interviews, as well as guidance on economic analysis and other 'off the shelf' approaches that are consistent with the RRRP.

This also applies to economic analysis. In principle, studies currently follow Treasury Green Book guidance; in practice, adherence is sometimes patchy (e.g. in the use of discount rates). In addition, programme-specific guidance could be more tailored and contextual. This would improve the quality of analyses, for example by recommending the most appropriate discount rate or benefit period for a given context (a worked sketch of discounting follows the list below). Areas where economic guidance would be useful include:

  • Identifying intervention costs
  • Establishing additionality (especially for 'softer' outcomes)
  • Unit vs marginal costs
  • Discount rates and sensitivity analysis
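To illustrate the final bullet, the minimal sketch below discounts a hypothetical benefit stream to present value and tests its sensitivity to the discount rate. The benefit figures are invented; 3.5% is the Green Book's standard central rate for shorter appraisal periods, and the low and high rates shown are examples only. Any guidance would need to specify the appropriate rates and benefit periods for a given context.

```python
# Worked sketch of discounting and sensitivity analysis. The benefit stream is
# invented for illustration; 3.5% is the Green Book central rate, and the other
# rates are example sensitivity tests.

def present_value(benefits, rate):
    """Discount a stream of annual benefits (year 1 onwards) to present value."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits, start=1))

annual_benefits = [10_000, 10_000, 10_000]  # hypothetical three-year benefit period

for rate in (0.015, 0.035, 0.06):  # low, central and high rates for sensitivity
    print(f"Discount rate {rate:.1%}: PV = £{present_value(annual_benefits, rate):,.0f}")
```

Reporting the central estimate alongside the sensitivity range in this way makes clear how far the conclusions depend on the discounting assumptions.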

To host these materials, we recommend the development of a microsite specific to the TEC programme. This would act as a 'one stop' resource hub for technology-enabled care evaluations. To manage this site and the guidance development, we recommend fostering some in-house evaluation skills within the TEC programme (e.g. a part-time staff role). This person could also support delivery staff with their evaluations. Finally, they could be responsible for keeping abreast of the latest international research and disseminating it internally.

6. Conduct more multi-stakeholder research

Evaluations to date have tended to focus on a single stakeholder, even in contexts where there are multi-stakeholder impacts. We recommend taking a wider approach, which is again consistent with the RRRP. This is especially important for economic evaluations to ensure that benefits are being measured holistically. In our review, only one in ten studies adhered to this even though it is considered a best practice approach (Treasury, 2003). This is a further area where bespoke guidance may be beneficial.

Two stakeholder groups that would merit a greater focus in research are paid staff and unpaid carers. Although qualitative research exists in some of the workstreams, the evidence base could be strengthened. As discussed, staff resistance was one of the barriers to successful implementation most frequently cited by research participants. There is also scope for research at the programme level, given that workforce issues are common to all workstreams. For example, the work on implementation could have a special focus on the determinants of staff resistance and the use of strategies to overcome them, especially for workstreams that are at an earlier stage in the implementation cycle. Carers are important for outcomes and process evaluations, as they benefit from the programme but are also essential to successful implementation. Evaluations could be scoped to measure carer impacts more routinely. Key metrics would relate to carers' feelings of support in their caring role, how burdened they feel and their own quality of life.

7. Further research on future benefits, sustainability and mainstreaming

There are two types of sustainability relevant here: sustainability of service models using technology-enabled care, and sustainability of the outcomes (i.e. the benefit period for outcomes). The first is concerned with whether people cease to use a technology over time, and the second with whether continued use of the technology goes on providing the initial benefit. In economic studies, we would describe this as the 'benefit period'. Due to the early-stage nature of the programme, much of the evidence is still short-term and it is difficult to answer these questions with existing data. The exception is telecare, which is building on many years of previous work. Although not a major challenge, there is some evidence of users discontinuing with a technology, which merits further research. There may be scope for tracking users over time to understand better the duration of outcomes and recidivism, or the reasons that people cease to use a technology-enabled care intervention. This will be important for economic studies that want to project benefits into the future or understand the rate at which benefit declines over time.
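Purely as an illustration of how the benefit period might be formalised (the notation here is ours, not drawn from programme guidance): if tracking data suggested that a proportion s of users sustain the benefit each year, then an initial annual benefit B_0, discounted at rate r over T years, would have a present value of

\[ PV = \sum_{t=1}^{T} \frac{B_0 \, s^{t}}{(1+r)^{t}} \]

so the annual drop-off rate (1 - s) directly determines how quickly projected benefits decline, and hence how far into the future they can credibly be claimed.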

An additional area for future research is understanding the point at which a technology becomes mainstream. This would include some exploration of what constitutes 'critical mass' for each technology. This links to the question of when a technology can be considered to have been fully implemented, such that there is no further need for a TEC programme. This may also help identify the most promising set of activities to achieve mainstreaming.

8. More research on some secondary outcomes

Unsurprisingly, the evaluations have tended to focus on the primary outcome of a technology, often a clinical or care outcome. However, there are some gaps relating to secondary outcomes, which could benefit from some further research. The two we highlight here are quality of life/well-being impacts and health inequalities.

Improving the well-being of clients is central to the TEC programme, yet in some instances there is a more limited understanding of the impact of using technology-enabled care on quality of life. This was also highlighted as a need by Hudson's Year 1 evaluation (2016). There are exceptions with some areas such as telecare. However, there may be scope for building these impacts into evaluations more routinely, as well as exploring the most appropriate types of quality of life measures to use for research with TEC beneficiaries. This is an area that workstream leads have identified as requiring further support, drawing on best practice from the science of well-being measurement.

Measuring health inequalities is challenging but it is important both because of the potential for technology-enabled care to address health inequalities, and the risks that it will reproduce some of the inequalities associated with digital exclusion. We recommend the programme undertakes research to examine how it can ensure equality of access to technologies. For example, such a study might consider what steps might be taken to mitigate such inequalities being exacerbated through systems like digital platforms.
