Appendix C: The monitoring and evaluation framework
In this chapter, we describe the process used to develop the monitoring and evaluation framework for the FNP, and summarise its content.
1.1 Context for developing the monitoring and evaluation framework
Olds and colleagues believe that the effectiveness of the FNP model is contingent on it being implemented as intended. For this reason, delivery of the FNP is tightly specified in many respects: eligibility criteria, training requirements, caseload size, and the number, timing and, to a large extent, content of the visiting schedule are all pre-defined. These requirements are intended to provide a blueprint to guide implementation, thereby ensuring consistency with Olds' 'tried and tested' model.
To assist this, the FNP is accompanied by a series of fidelity requirements and 'stretch goals': the former are tightly specified criteria with which the project is expected to comply whereas the latter are more fluid and aspirational.
As part of our contract for the commissioned evaluation, we were to develop a monitoring and evaluation framework that would capture:
- data to provide ongoing feedback to the Project Board and project implementers on whether the programme is being implemented as intended, in particular whether it complies with the fidelity requirements as set out in the licensing agreement;
- system alerts in order that the Project Board and implementers are given advance warning of any actual or likely failure to meet these fidelity requirements; and
- data on outcomes of interest, in particular those that are considered to be particularly relevant to understanding the FNP's implementation in Scotland.
1.2 Our approach
We used logic modelling to establish stakeholder consensus on what the FNP in Scotland was intending to achieve and how, and in turn used this as the basis to develop its monitoring and evaluation framework.
1.2.1 About logic modelling
Logic modelling is increasingly used as a tool to assist outcome-focused planning, the implementation of projects and programmes, and the development of associated monitoring and evaluation frameworks.
Essentially, a logic model is a convincing picture linking intended results with planned activities. It can be used to make explicit the available inputs, the projected sequence and timescales of activities and associated outputs, and the connections between these and the intended outcomes.
The process of developing a logic model creates opportunities to highlight assumptions underpinning the successful implementation of a programme and its effectiveness in achieving the desired outcomes. This process of identifying underlying assumptions can help stakeholders consider whether there are any flaws in the logic of their model, and therefore whether and how these might be overcome.
The process also encourages stakeholders to identify external factors over which they have no control. In turn, this can prompt them to consider whether there are any contingency plans that they should put in place to limit any risk to programme implementation and/or effectiveness.
Furthermore, developing and agreeing a logic model can be useful:
- to engage stakeholders in the planning process; and
- to help them understand the contribution that they can make and for which they might reasonably be held accountable.
In addition, and most importantly, the process provides an opportunity to jointly consider whether stakeholders' plans are not only logical/plausible, but also do-able and testable.
1.2.2 How logic modelling can inform the development of a monitoring and evaluation framework
Setting out the projected sequence of activities, outputs and anticipated outcomes in a well specified manner provides a basis for testing whether a programme has been implemented as intended and achieved the changes that are envisaged. Thus it provides a basis for:
- monitoring process and progress;
- assessing effectiveness; and
- articulating assumptions that can or should be tested out via evaluation.
For these reasons, the development of a logic model provides a useful basis to guide the development of a monitoring and evaluation framework.
1.2.3 The development of logic models for implementing FNP in Scotland
We facilitated a series of discussions, the purpose of which was to develop three logic models:
- a high level 'Google Earth' model showing how the FNP is viewed to contribute to outcomes of national (Scottish) interest in the short, medium and long term;
- a detailed implementation logic model, explicating key outputs of the project, including (but not restricted to) its fidelity requirements/stretch goals, for which the project is obliged to collect monitoring data; and
- an embedded implementation logic model detailing the stakeholders' collective view on the short-term outcomes that the project is expected to achieve over the period of its implementation, i.e. in supporting teenage mothers up until their children are two years old.
In the discussions surrounding the development of the logic models, we highlighted a number of issues that we suggested had implications for FNP implementation. These included the following:
- The absence of any contingency arrangements should any members of the FNP team leave their post or go off sick: as delivery relies on FNs attending bespoke training, we suggested that it might be useful to create a 'bank' of additional health visitors by training them in the FNP programme.
- FNs would be unlikely to have the capacity to cover others' caseloads: the English evaluation highlighted the excessive demands on family nurses, resulting in them working more hours than they were contracted to do, meaning that existing FNs would have little spare capacity.
- The recruitment requirements were ambitious; in view of this, and the challenges involved in setting up any new programme, we suggested that it might be desirable to allow a longer 'lead-in' period.
- The absence of an FNP database, together with the short/fixed-term nature of the FNP (Lothian) Lead's contract (until August 2011), were seen as potential threats that might compromise the quality, interrogative potential and maintenance of implementation data that are, in themselves, a requirement of the FNP licence.
- The English evaluation highlighted that the quality of some of the data collected via the FNP paperwork is questionable: for example, the timing and phrasing of questions on domestic abuse are such that not only may (early) disclosure be low, they also do not allow an assessment of any change that has taken place during the programme. We suggested that the latter weakness is problematic if domestic abuse is an outcome of interest.
Furthermore, one of the stakeholders indicated that whereas the FNP uses the Hospital Anxiety and Depression Scale (HADS) to assess maternal mental health, it is routine practice in Lothian to use the Edinburgh Postnatal Depression Scale (EPDS), and that communication/referrals to service providers would require the use of the latter tool. This stakeholder therefore suggested that the FNP routinely use this tool either instead of the HADS, or in addition to it.
These three logic models, which reflect the consensual view of the stakeholders, are provided in Appendices.
In turn, the latter two models (i.e. the implementation models) provided the basis for a paper in which we scoped out options for inclusion in the monitoring and evaluation of the project.1
Subsequently, stakeholders were asked to consider these options and identify their priorities for inclusion in the final monitoring and evaluation framework. It was agreed that decisions regarding priorities should be underpinned by a number of considerations, including:
- the feasibility of collecting data in a consistent and timely manner;
- the (anticipated) acceptability of the data collection measures for both the families and for the family nurses;
- the requirement for good quality data;
- the need for meaningful data (in particular, the availability of routinely collected data from across Scotland or Lothian that can be used for comparative purposes and thereby aid attribution of any improved outcomes to the FNP);
- learning from the English evaluation (i.e. conclusions regarding measures that were included in the English evaluation design, but subsequently were considered to be of limited value); and
- 'added value', i.e. the data that are collected should provide insights that are important to understanding the project's implementation in Scotland, and should not attempt to determine whether or not the project is effective in achieving those outcomes that are being assessed as part of the larger and more resource-intensive (RCT) evaluation of the English pilot sites.
In addition, and informed by the discussions regarding alignment with, and potential contribution to, Scottish national outcomes, a further consideration was whether the project might want to collect data that could evidence its contribution to relevant national outcomes, e.g. HEAT 7 and its target to increase the proportion of babies exclusively breastfed at 6-8 weeks.
1.3 Terms of reference for the monitoring and evaluation framework
The monitoring and evaluation framework was developed to map out the information that stakeholders identified and agreed as priorities in relation to the FNP in Scotland.
Importantly, it was intended to indicate not only which data would be collected, but also by whom. As such, decisions on who would collect data were underpinned by the acknowledgement that:
- the FNP would collect information from all women who take part;
- the independent evaluation would collect in-depth information from a sample of up to 15 women and their family members (on four occasions: during pregnancy, early in the postnatal phase, and when the children are 12 months and 18-24 months old), and from stakeholders who are key to the planning and delivery of the programme.
The implication of this was therefore that any decisions to collect data from all women would require that this be performed by the FNP team.
1.3.1 Roles and responsibilities
A vast array of documentation is integral to the FNP programme and a requirement under the licensing arrangements: in addition to the mandatory records of each visit, there are forms that are administered to guide family nurses' decisions and practice, including the support that clients may need, e.g. the maternal health assessment when clients embark on the programme (UK005).
In addition to the requirement for the collection and data entry/management from these sources, it was agreed that the FNP team would be responsible for collecting all data necessary for routine (and regular) monitoring of the project's implementation.
It was agreed that this should include collecting the data necessary for establishing and supporting a system of implementation alerts.
Furthermore, wherever there was an identified need for data on outcomes of interest, it was recognised that this would require that data be collected across the full cohort of women. The responsibility for this would therefore fall to the FNP (and not the external evaluation).
The implication of the above was that the data collected by the FNP should:
- enable an assessment of the extent to which the project has been implemented as intended; and
- have the potential to provide some insights into the project's effectiveness. However, the extent to which these data would actually be able to demonstrate effectiveness would be dependent on factors such as the utility/robustness of the data collection instruments, the availability of comparative data (e.g. from across Scotland), etc.
In contrast to the role of the FNP team, it was acknowledged that the data to be collected by the external evaluation would be qualitative in nature and focus on processes. It was agreed that its remit should include:
- testing out the assumptions on which the programme rests;
- exploring views on the feasibility/do-ability of the programme (and the fidelity requirements), particularly in view of the resources available and the timescale for implementation; and
- seeking to identify whether there is any qualitative evidence to indicate that outcomes may have (plausibly) arisen as a consequence of the programme.
This apparent division of roles belies the symbiotic relationship between the internal and external evaluation, however. In acknowledgement of the importance of this two-way relationship, it was agreed that the Lothian FNP implementation lead (who has lead responsibility for the internal evaluation) should participate in key meetings of the external evaluation team.
1.3.2 Key questions for the external evaluation
As indicated above, it was agreed that the external evaluation would concentrate on exploration of: some key assumptions; the programme's do-ability; and views on whether (and if so, how) FNP is effective.
Thus, in terms of exploring key assumptions, there was agreement that those listed below should be explored by the external evaluation team, i.e. that:
- the programme is reaching those whose babies are most likely to be at heightened risk;
- there will be a high uptake of the programme, and once enrolled, few will drop out;
- the programme will be valued/highly acceptable to families and will be felt to be non-stigmatising in nature;
- the family nurses will establish a therapeutic relationship with the families, and that this will engender open dialogue on matters concerning the mothers' and their babies' well-being and development;
- the service infrastructure is supportive and able to respond to referrals made by the family nurse for additional support; and
- the FNP is a better/more effective service than that routinely provided to the target group.
In terms of the feasibility considerations (i.e. the do-ability of the programme), there was agreement that the external evaluation team should consider whether:
- the fidelity requirements are realistic (and compatible with the delivery of person-centred support);
- the allocated budget is sufficient to deliver the programme as intended;
- the training and support that family nurses receive is sufficient to equip them for their role; and
- the demands on the FNP are manageable, both in terms of the delivery of the model, and the collection of (good quality) data.
The third responsibility for the external evaluation is to explore whether there is any qualitative evidence that key outcomes have arisen as a consequence of the programme. Those outcomes that the stakeholder group (including the commissioner) indicated as being of particular interest were:
- parenting qualities and behaviours including responsiveness, warmth, and attachment;
- parental involvement in learning;
- protective health behaviours, such as smoking cessation in pregnancy;
- breastfeeding; and
- psychological resources, including self-esteem.
To address the stakeholders' interest in these outcomes, it was agreed that the external evaluation would include a focus on the plausibility/likelihood that the FNP has influenced them. Doing so was felt to require consideration of:
- data provided by the FNP (i.e. from the extensive records that are integral to the delivery of the programme);
- routinely collected data sources, where available (e.g. breastfeeding rates for FNP mothers as compared with women living in areas of deprivation); and
- data collected by the external evaluation, i.e. the panel interviews with families and the interviews with the family nurses.
It was agreed that the internal and external monitoring and evaluation would focus on:
- distilling learning on FNP delivery in Edinburgh, including the barriers faced;
- understanding and exploring views on the skills, systems and infrastructure believed to be necessary to implement the programme, and challenges faced in achieving these; and
- on the basis of these insights, contributing to national learning on how the programme (or aspects of it) might be used in the future.
There was agreement also on what the monitoring and evaluation would not tell us. Thus:
- while we will learn whether/how the FNP was implemented as intended and the reasons for this, the evaluation will not be able to demonstrate whether the model - if implemented according to the fidelity requirements - would result in improved outcomes in Scotland;
- while we will obtain some insights as to whether, and if so how, the programme might have (plausibly) influenced the mothers in some key areas (e.g. breastfeeding, parenting), we will not be able to conclude that it actually effected change that would otherwise not have happened; and
- while we will hope to develop some insights into factors associated with engaging teenage mums and working with them in a person-centred manner to address (and improve) outcomes for them and their children, the evaluation will not provide a blueprint for the future of community nursing in Scotland.
1.4 The monitoring and evaluation framework
The development of the monitoring and evaluation framework was underpinned by the implementation logic models together with the stakeholders' agreements about the particular elements/dimensions which they considered to be a priority for the Scottish evaluation.
This framework did not (and was not intended to) detail all the information that would be collected over the life of the FNP: it was acknowledged that the FNP would be collecting a wider set of data than those included in the framework, for example, to help the family nurses tailor their activities to the needs of individual families.
Furthermore, it was acknowledged that the external evaluation would also collect data on more issues than those detailed in the framework: as the project evolved, additional areas of interest were likely to emerge that ScotCen would explore in more detail. This iterative approach is a common (and valuable) feature of qualitative evaluation methods.
The framework was built by going through each of the outputs and outcomes in the implementation logic models, and using these as a basis to formulate a list of key questions. The ones that were identified for inclusion within the framework were as follows:
- Does the team receive the training and support intended, and develop the knowledge and skills required?
- Are fidelity requirements met for recruitment?
- Does the project meet the fidelity targets for attrition?
- Do the family nurses carry out the intended number of visits?
- How feasible/appropriate is the visiting schedule?
- Do family nurses conduct their consultations in line with the fidelity criteria?
- Is the FNP structure useful/appropriate?
- Are FNP data entered into the FNP database in a timely fashion?
- Is there any evidence that the FNP leads to use of screening/antenatal services and recommended antenatal practices (CEL 31)?
- Is there any evidence that clients feel better prepared for birth?
- Is there evidence that the FNP results in improved knowledge /health behaviours in clients prior to/following birth of baby?
- How good are the pregnancy outcomes of those enrolled on the programme?
- Is there any evidence that the programme improves knowledge about how infant health can be promoted, and that any such knowledge is translated into behaviour?
- Is there any evidence that the FNP engenders positive parenting practices and bonding?
- Is there evidence that the client knows about key hazards and engages in practices to keep child safe?
- Do clients mobilise support within personal networks?
- Is there any evidence that FNP reduces domestic abuse?
- How involved are fathers in the FNP process/visits?
- Is the FNP seen to engender fathers' involvement?
- Is there any evidence that mums feel more supported and less anxious/depressed because of the programme?
- Is there any evidence that the FNP leads to fewer unplanned pregnancies, helps mums work out what they want to achieve, and supports them in realising their plans?
- Is there any evidence that the FNP programme leads to improved child health and development?
Using a unique code to enable cross referencing of each evaluation question to the outputs and outcomes in the implementation logic models, the framework detailed:
- the indicator(s) that would be used;
- whether the output/indicator was a fidelity requirement or stretch goal (and if so, the criteria for a 'programme alert' that would serve to provide a timely warning of any failure to achieve the programme's recommended inclusion criteria and/or intervention 'dosage');
- who would collect the data, how and when;
- who would analyse the data; and
- whether there were additional considerations that should be borne in mind.
1.4.1 Identification of problematic data/data sources for evaluation purposes
There were a few outcomes (and associated questions) that were considered to be of interest but that, while included in the monitoring and evaluation framework, were (explicitly) identified as problematic, either because of difficulties in collecting good data and/or in drawing conclusions that could enable attribution of any effect to the FNP. For example, the question 'is there any evidence that the infants in the programme are not being maltreated?' will be monitored (routinely), but the associated question - 'can this be attributed to the FNP?' - cannot be answered, as the 'expected' number of cases is likely to be very small (given the small cohort), making it difficult to draw any conclusions about FNP effectiveness in relation to this outcome. Furthermore, there is no obvious and useful proxy measure: referrals to social services, for example, could be considered either a 'good' or a 'bad' thing. Similarly, the question 'is there any evidence that the FNP results in fewer accidents?' is problematic, as the data that are routinely collected do not provide a meaningful basis for comparison because, for example, not all cases of unintentional injury result in admission to A&E departments, and not all admissions to A&E are for serious injury.
While the FNP is required to collect data across a number of measures (such as accidents, maltreatment, etc.) in order to ensure it meets the needs of its clients, not all these measures are core to evaluating whether the FNP has been implemented as intended or whether/how it has been effective. Rather, they provide additional information to help us (simply) describe and profile those involved in the programme.
The monitoring and evaluation framework was agreed and 'signed off' in February 2010. Since then, it has provided the template for capturing the 'core data set' on which the commissioned evaluation is based.
1 Scoping options for monitoring and evaluating the FNP: a preliminary discussion paper, January 2010.