
Evaluation of Homelessness Prevention Innovation Fund Projects

CHAPTER 5: LESSONS FOR THE EVALUATION OF HOMELESSNESS PREVENTION AND PROGRAMME DESIGN

5.1 This Chapter considers lessons about monitoring and evaluation practice, drawing on the experience of the eight HPIF projects. There are a number of lessons for funders, in both national and local government, of future programmes or projects where evaluation evidence is expected to be provided. Many of the apparent lessons about evaluation practice are actually lessons of programme design, suggesting that evaluation needs to become a greater priority amongst senior managers and staff concerned with homelessness strategy and implementation.

5.2 As part of this research an Exchange Event was held in October 2007 to bring all the projects together; this is described in section 1.15. A number of discussions about evaluation practice generated interim good practice guidelines. The event illustrated that projects were well aware of the potential value of monitoring and evaluation, but that difficulties arose in implementation and very few projects gave priority to addressing these issues in practice. This Chapter draws both on the Event and on the experience of the researchers working directly with the HPIF projects.

5.3 Chapter Two outlined many of the challenges of monitoring and evaluating homelessness prevention in both conceptual and methodological terms. This research has illustrated how formidable those challenges are, given the extent of resistance to evaluation and the lack of robust monitoring and evaluation plans amongst the projects. Over a year after the last of the projects was originally due to complete, not all the HPIF projects have been fully implemented, with a knock-on effect on evaluation. Whilst some projects have a completed evaluation report, a few barely have any monitoring data. As a result there remain gaps in the evidence about their impact on homelessness prevention.

Lessons of evaluation practice for funders and commissioners

5.4 Lessons of evaluation practice for funders and commissioners are not confined to homelessness prevention projects, but reflect the broader cultural understanding and attitudes to evaluation. They are very much a part of the way that a whole funding programme is designed and delivered.

Be clear about expectations

5.5 There are issues about how different parties view initiatives such as the HPIF, with these different perspectives probably reflecting the realities of the funding climate. From one perspective, the HPIF appeared to be a programme of disparate but related projects that would test out new approaches to homelessness prevention. From another, it was viewed by some applicants as a 'soft pot' of funding that would provide the necessary resources to undertake prevention work in their area. Amongst commissioners, there was little critical questioning of whether proposals were genuinely innovative or a retread of similar initiatives, or of the basis of the evidence cited in claims about previously successful projects. Likewise, some of those in receipt of funding appear to have given little thought to what it might actually mean to participate in a 'pilot' in terms of the demands to provide evidence of impact. The term pilot is widely used, but rarely scrutinised.

Allow time to develop robust proposals

5.6 Commissioners should structure the bidding or proposal process so that prospective bidders have sufficient time to develop robust proposals, but do not spend a lot of time developing proposals that have very little chance of attracting funding. A two-stage process would allow more time for scrutiny of initial ideas and potentially allow more innovative or challenging proposals to be developed, perhaps involving joint proposals from projects that undertake related work but do not normally work together. Practitioners need time and encouragement to work up good ideas, broker new partnerships and generate deliverable proposals that address the need for a shift in organisational culture and ways of working.

Recognise a variety of sources of evidence to inform practice

5.7 Proposals are likely to be based on a mix of existing evidence about previous or similar initiatives or work with a particular client group; service user views and local knowledge about needs and what is likely to work in a local context; and professional judgement based on established practice and emerging ideas or promising practices. Figure 5.1 illustrates the blending of these sources of evidence to inform practice and the relationship with on-going evaluation.

Figure 5.1: Evidence-informed practice - blending different sources

5.8 This recognises that research-based evaluation evidence is only one form of valid knowledge. Local sources of intelligence and professional judgements also have a role to play in using a blend of evidence to inform proposals. It also recognises that what might be considered good practice in one context may not transfer to another, and that local implementation would be strengthened by a more formative approach to evaluation that informs practice as it develops, in a more iterative, reflective and engaging way. It also implies a stronger role for self-evaluation, both in the sense of 'internal' rather than external evaluation (which can often 'outsource' much of the learning) and of greater evaluation by the projects of their own practice as part of a process of reflective learning and continuous improvement.

Prioritise evaluation and provide support early on

5.9 A fair proportion of the time of this project was focused on getting the projects to take evaluation seriously and to establish sound basic monitoring systems. In some cases, it has been difficult to get the HPIF projects to engage with this at all. Evaluation can be experienced as an oppressive audit process; whilst this was not the approach adopted here, inevitably there was defensiveness about an externally imposed requirement. Some projects were quite frank about their difficulties in complying with the requirements of the funders within their existing resources. In other cases, projects were cooperative and positive.

5.10 All the evaluations that have been completed have been summative, although this was more by necessity than design. In a few cases, the bulk of the evaluation work has been undertaken by the researchers in order to ensure that service user views were captured, that wider partners were included in the process or more simply that the evidence was written up at all.

5.11 Evaluations are often viewed by practitioners as not particularly useful. This is a Catch-22: if practice is to change, evaluation approaches have to be built in from the very start of projects or programmes. This comment was made at the Exchange Event:

"Far too many projects have these sort of one off evaluations that generate 40, 50, 100 page reports, which don't really tell you very much. They usually tell you what went wrong with the project - and which boxes have you ticked. They are costly and they do not serve the sector well. I very much feel that evaluation should be much more of a process - something that's built in right from the very start. It should have a sort of light touch. It should focus on outcomes, which service users and staff have a role in defining. …. it should be something that's ongoing and not a big bang thing that is bolted on at the end - and then yields a report which nobody reads."

5.12 Evaluation need not be viewed as a threat if it is approached more positively, as a built-in way of getting feedback, improving practice and, ultimately, improving outcomes for service users. There are lessons to be considered about how expectations are conveyed by funders, the timing of commissioning this kind of overview evaluation, and how evaluation itself cannot be really innovative or even useful unless very clear signals and advice are provided from the very start of a programme.

Expect an outcome focus in project proposals

5.13 A key insight from this research is that a theory of change approach is valuable in encouraging a greater focus on outcomes, rather than outputs. However, its greatest value could be at the project design stage. In this research, the attempt to encourage projects to articulate the links between apparently worthy activities and the prevention of homelessness often met with resistance, even though funding had been approved. The approach did yield some useful insights as a practical tool to help design a monitoring and evaluation framework; these are discussed in section 5.22 below.

5.14 Funders should consider a requirement that proposals do spell out their assumptions about typical pathways or links between certain activities, behaviours and the risk of homelessness (or whatever other outcome is desired). This should include a timeline. A clear signal should be given to encourage greater realism in claims for intended outcomes and perhaps greater creativity and genuine innovation.

Make the monitoring and evaluation framework meaningful

5.15 If the monitoring and evaluation framework is to be meaningful and useful, it is important not to prescribe the detailed indicators in advance. Agreement on which indicators will be used to monitor project outputs and outcomes should be developed in partnership once outcomes have been agreed; this can be a practical result of a theory of change approach. Otherwise, there is a tendency for projects either to propose or to accept a series of indicators based on those most readily available, those that have always been collected, or apparently 'objective' measures that do not actually reflect the aims of the project. A participant at the Exchange Event summed up one of the discussions:

"the kind of key thing that we all hate was that sometimes the actual evaluation framework can inhibit the real value of what it is you're doing - and you shouldn't really have a preconceived idea possibly of what it is you want to measure - because things change and you need to keep up with that."

5.16 Adopting this approach can lead to a lighter burden of data collection and can help to ensure that the data collected is meaningful for both clients and staff. This was a major concern of the participants at the Exchange Event. The CAB Rent Arrears Project struggled for several months to provide extensive client monitoring data that was of very little use in assessing the outcomes of the project and required considerable administrative time and effort; it found it difficult to challenge these requirements simply because this was what had been agreed at the beginning of the project. There is obviously a balance to be struck here: funders will need certain common performance information, but could assist greatly by being much more robust in deciding what information is essential for these purposes and what could be dispensed with.

Be proportionate in expectations and identify funding for evaluation

5.17 Some of the resistance to evaluation encountered amongst the HPIF projects arose because a number of them had received relatively small amounts of funding for project delivery, so the demands for evaluation were experienced as disproportionate and unreasonable. Not all the original project proposals identified a separate cost for the evaluation element, which is both a reflection of and a cause of the low priority given to it.

5.18 However, even a small-scale project is still worth evaluating. This research has demonstrated that the most valuable and effective interventions are not necessarily those that cost the most. This underlines the need to be explicit about expectations and to adopt a more hard-headed and thoughtful approach to the collection of monitoring data, so that projects are not overburdened and can see the potential value in evaluation.

Lessons for projects and evaluation methodology

5.19 There are a number of lessons for projects and for evaluation methodology and this section summarises and discusses these. Fuller guidance is available in the revised Good Practice Guidance in Annex 1.

Evaluate in partnership - agree what success would look like

5.20 Amongst the challenges of evaluating the prevention of homelessness is the fact that it may be impossible to prove that a specific project intervention has been responsible for preventing homelessness. Definitive attribution of outcomes to specific interventions is probably an unattainable goal; indeed 'proof' of this nature may not be the point. Many homelessness prevention projects are based on multi-agency working in recognition that prevention of homelessness does not happen in isolation from other interventions. However, it is less common for all partners to be included in early discussions about how to evaluate the work. A multi-agency approach to evaluation will help people to recognise the complexity of the context in which projects are working and enable better understanding of the shared contributions of a variety of partners to the ultimate outcomes.

Measure and map outcomes

5.21 Many of the HPIF projects were aiming to build resilience to crisis, prevent crises or reduce the chaotic nature of homelessness. Indeed, some saw a crisis as a potential trigger for personal change and therefore viewed it more positively. These varied objectives mean that notionally 'objective' direct measures of success in terms of a reduction in homelessness presentations or evictions prevented are not always appropriate.

Figure 5.2: Safe as Houses: project graphic of theory of change

5.22 Producing an outcome map may be a useful way to show that there are primary and secondary outcomes and show the distinctions between outputs and outcomes. The Safe as Houses project used this kind of 'theory of change' approach to show typical pathways or links between activities, behaviours and the risk of homelessness or other outcomes for clients. This process was initiated at an early stage with the project coordinator and manager and helped the project staff to think about appropriate output and outcome measures for the project. The 'pathways' through the project were mapped as a graphic or 'outcome map' which is shown in Figure 5.2 above.

5.23 Whilst the Safe as Houses graphic was on display and elicited questions about the evaluation, the value of the process was largely in the conversation that it produced about the scope and scale of the work of the project. It was quickly evident that the project was actually about much more than the provision of safety measures; advice and support was a strong element of their approach and there were softer outcomes that are inherently difficult to track and to attribute to the impact of the project. The project referred to these as the 'peace of mind' factor.

5.24 All of the HPIF projects had similar 'soft' outcomes of some kind. These were the more intangible aspects of their work, such as the development of social skills, confidence, motivation, health and personal awareness, and the ability to exercise good judgement. The discussions at the Exchange Event illustrated that the projects saw these outcomes as no less valid than their 'harder', more tangible counterparts. Indeed, many of the precautionary interventions were based on the experience that a failure to achieve the softer outcomes could undermine the achievement of more tangible outcomes, such as maintaining a person in their existing home.

Be formative, flexible and appreciative

5.25 Some HPIF projects had undertaken informal review sessions as they proceeded. This comment from the Exchange Event suggests that this kind of practice tends to be seen as lying outside the formal monitoring and evaluation process:

"We've got a common evaluation framework …equally there has been reflective practice ongoing so when we have hit glitches we have all sat down and thrashed it out….which is outside the evaluation framework - it's just purely about 'we've got a problem what do we do here?' And I think that way is fairly common to most projects - a degree of reflective practice."

5.26 These kinds of practices have not tended to be systematic or fully recorded, and they are often reactive to problems. However, this informal approach has generated insights, and some projects have been able to adjust their delivery approach as a result, for example by altering referral processes midstream. It is a valid and valuable part of the evaluation process and should become a legitimate, on-going yet light-touch process that only collects data that is meaningful and useful for action. As one participant at the Exchange Event said:

" I think in another sphere that would just be referred to as a continuous improvement and learning approach and I think we ought to be doing that."

5.27 An element of this more formative process is recognising that whilst evaluation is likely to be more positive and useful if it is well planned in advance, this pre-planning needs to be balanced against an ability to remain attuned to the unexpected and the positive, by recognising and capturing incidents, stories or other accounts from clients, staff and partners which illustrate how things are working. This will help to develop a fuller understanding of other data and also pick up on unanticipated outcomes or spin-offs.

5.28 Being positive or appreciative by asking 'what's working well?' is a good starting point for evaluation. However, there is a tendency to overlook or discount the positives, as was acknowledged at the Exchange Event:

"One person was talking an issues log which they keep and she realised [just now] that she'd forgotten to put anything positive in it! So that when the evaluators came in, there will be just a long list of problems. I think that partly stems from not having sufficiently kind of well worked out systems at the start and a need to make sure that the positives are in there as well as negatives".

Review administrative recording systems

5.30 If monitoring and evaluation frameworks are to support the way that interventions work and provide data on outcomes, data recording systems have to be amended to support that process. For example, the Safe as Houses project had at least three separate recording systems in operation to provide client data, data on ineligible referrals and corporate performance outcome monitoring data. This is not uncommon, and it will clearly be a major challenge to harmonise such systems and to agree an approach that does not overburden individual projects with data collection requirements, complements existing reporting systems and acknowledges any limitations of those systems.

5.31 The City of Edinburgh Council's on-line ECCO system (Edinburgh Common Client Outcomes, formerly known as ECHO) is designed to record a range of issues, including 'softer' outcomes, and should make it possible to measure change over time. The system is used by a range of agencies across the City. However, it illustrated what is likely to be a common issue with similar systems: unless the client is still in contact with services, it will be necessary to go back to clients proactively to record their circumstances and assess outcomes for evaluation purposes. In the case of the SAH project, the project coordinator sought permission from women who had used the project for telephone interviews to take place. These were conducted by the external evaluator as a separate exercise and provided qualitative data on outcomes for women and their families that would otherwise not have been available.

Think about the practicalities of information collection

5.32 Some of the HPIF projects were working with a fixed and known client group on a 'casework' basis for a fixed and relatively short period of time. In these circumstances it may be feasible to collect some basic information in order to establish a baseline. However, projects may judge that the formalities of form-filling at a first session are likely to be a strong deterrent, and in practice this is not usually a priority. This can be a question of attitude: the Clackmannanshire WISH successor project has made the requirement to provide certain information for monitoring and evaluation purposes a much stronger and more explicit part of the 'offer' to its members.

5.33 Preventative interventions inevitably work with a vulnerable and sometimes chaotic group of people, and this may make it particularly difficult to collect information at the 'exit' point since this will often not be a planned event. For example, the WISH project had a set of baseline and exit questionnaires designed to assess outcomes for the participants. In practice, very few full sets of questionnaires were completed and as a result there is no evidence of outcomes for the women. More qualitative approaches can be more valuable, although there are still issues of data quality and of facilitating methods such as focus groups. There is scope for greater creativity in the use of qualitative methods and of approaches that engage people, rather than always issuing questionnaires or holding focus groups. However, not all methods will work in particular contexts or with different groups of people, and it will be necessary to be flexible in responding to any difficulties that arise.

Consider how to make best use of any external input

5.34 These issues of data quality raise a question about the skills and capacities that exist to undertake monitoring and evaluation. Formative evaluation is likely to involve a greater degree of self-evaluation, although this does not exclude the use of an external evaluator or mentor acting as a 'critical friend'. An evaluation does not automatically have to be conducted solely by an external evaluator; there is scope to use external input in more limited and strategic ways to better effect, depending on the skills and capacities that exist within the project or wider organisation.

Involve clients in a real not tokenistic way

5.35 In discussing the involvement of service users in evaluation, the participants at the Exchange Event expressed a desire to be inclusive, but not tokenistic; 'asking questions is not participation'.

5.36 Despite this, a number of the HPIF projects found it difficult to gather any feedback directly from beneficiaries where these were people at risk of homelessness. The Safe as Houses project used the external research input provided through this commission to undertake telephone interviews. Similarly, the researchers undertook interviews with serving and former prisoners on behalf of the Tayside Accommodation and Skills Project. As discussed above, WISH had an external evaluator who designed before and after questionnaires and ran a focus group, with little success in terms of the quality of the data obtained. Much of this depended on additional external resources being available to undertake this kind of exercise. In some cases, that was entirely appropriate and served to provide a greater degree of anonymity than might otherwise have been the case.

5.37 The Exchange Event participants also remarked that 'people will talk to their peers', but no projects have tried a peer-led evaluation. Service users could be involved in evaluation at a number of stages. An advance on existing practice would be to ask them what the questions should be rather than simply asking them the questions. These issues and challenges are discussed further in the Good Practice Guidance in Annex 1.