Chapter 4: Discussion
The range of questions identified as priorities in the workshops points to a mixed-methods approach to evaluation. This would combine primarily qualitative investigation of the process of delivery (from the perspective of both parents and practitioners), of parents' use of the Baby Box, and of their perceptions of its meaning and value, with primarily quantitative investigation of the impact of the Baby Box on parenting behaviours and child health outcomes.
We have not explored options for economic evaluation of the Baby Box, as this was not identified as a priority in the workshops. A cost-effectiveness analysis would require the identification of a primary outcome (or a small set of outcomes) which could be used to compare the Baby Box against an alternative course of action. This may not be appropriate given the wide range of aims identified for the Baby Box. A cost consequence analysis, which simply presents the whole array of outputs alongside the costs, may be preferable if an economic evaluation is required.
The consent for research included within the Baby Box registration card provides a potentially valuable register for obtaining samples of recipients. However, as it has not yet been used for evaluation, further investigation of the social patterning of consent should be carried out before it can be relied on to obtain representative samples. Likewise, the CHSP Pre-School system opens up the possibility of using routinely collected data to measure outcomes at much lower cost than approaches based on prospective, primary data collection, but extensive validation work is needed to establish which outcomes, if any, could usefully be captured from this source.
These considerations, together with the fact that there have been no previous evaluations of Baby Box interventions delivered on a whole population basis, suggest a phased approach to commissioning evaluation may be advisable, with a substantial initial phase of feasibility testing and development work prior to final decisions about the design of a substantive process and outcome evaluation.
The feasibility/development phase might include:
- analysis of the pattern of consent/non-consent to take part in research among Baby Box registrations, according to socio-demographic characteristics such as age, birth parity or SIMD;
- analysis of the quality of data on outcomes (such as sleeping position, child development and referrals to other services) recorded in the CHSP Pre-School system of assessments;
- comparison of CHSP Pre-School with systems used elsewhere in the UK for recording early years outcomes, in order to identify possibilities for using geographical controls in the outcome analyses, as well as controls for trends in outcomes over time; and
- scoping reviews of previous research on the impacts of benefits in kind provided during pregnancy in developed countries, and on methodologies (such as diaries) for collecting detailed information on safe sleeping and other parenting practices that might be influenced by the Baby Box.
The approaches we have discussed for identifying the effects of the Baby Box are observational rather than experimental, in that they would rely on data collected alongside the implementation of the policy rather than on randomly assigning some people to receive the Box and comparing their outcomes with those of people who do not receive it.
Observational methods are widely used to evaluate public health and other policies where random assignment is impractical or considered unethical, but they are subject to a number of limitations in relation to the kinds of effects that can be measured and the extent to which observed differences in outcomes could be attributed to the Baby Box. These limitations could in principle be overcome by a randomised controlled trial.
While a randomised controlled trial of the Baby Box scheme as a whole would be impractical in the context of full implementation, it should be noted that experimental trials of variants of the Baby Box (e.g. with additional items or information) or alternative modes of delivery (e.g. with additional professional/practitioner input for some or all recipients) would not necessarily be unethical or impractical. Trials may be worth considering if the evaluation identifies areas for improvement.