Volunteering for All: national framework - literature review

This report outlines a systematic review of the research literature on volunteering.

Approach and Methods

The Stirling literature review method

We used the ‘Stirling literature review method’ to identify, collate and evaluate relevant literature for synthesis. This incorporates systematic searches of a wide range of databases, filtering of results for relevance, and the use of a specially designed pro forma to extract key information about the subject matter and results systematically and to assess the quality of the research as reported.

This method has been successfully used to provide a rigorous assessment of the evidence base in a range of contexts, e.g. physical activity for older people (Bowes, Dawson, Jepson, & McCabe, 2013); cultural differences in satisfaction with adult social care (Bowes, Dawson, & Greasley-Adams, 2013); home care services for people with dementia (A. Dawson, Bowes, Kelly, Velzke, & Ward, 2015); design of residential environments for people with dementia and sight loss (Bowes, Dawson, Greasley-Adams, & McCabe, 2016). Our search strategy, described in more detail below, is designed to rigorously and systematically interrogate the evidence base to identify research of direct relevance.

The data extraction section of the Stirling pro forma has been tailored to the project and designed to capture data in line with the objectives of the literature review. The data extracted from articles included a summary of the key findings of the work, data about the nature of the research described, and authors’ key conclusions and recommendations for further research. 

In the course of completing the Stirling pro forma, reviewers identified the research design of the item being reviewed, and then answered a series of evaluation questions (linked to the Scottish Household Survey where possible) relating to specific research designs, based on standard protocols widely used in reviewing. These included: the Centre for Reviews and Dissemination (CRD) Report No. 4, used for randomised controlled trials; Cochrane Effective Practice and Organisation of Care (EPOC) checklists, used for controlled before-after studies; and Critical Appraisal Skills Programme (CASP) assessment criteria, used for literature reviews and qualitative studies (CASP, 2013; EPOC, 2015; NHS CRD, 2001). Having responded to detailed questions about the design of the study, its conduct, and its conclusions, reviewers were then asked to rate it as of high, medium, or low quality and to record their reasons for doing so. Thus, each study was quality assessed according to specific criteria relating to studies using the same approach, and assessments were structured and consistent within study type without implying a hierarchy between types of research evidence.

The reviewing process and numbers reviewed

The literature review filtering process is visualised in Figure 1 below. A total of 37,031 papers were returned by the database searches; removing duplicates left a working database of 30,234 papers. The development of the keywords for the search is detailed in Appendix Two.

Figure 1 The Literature Review Filtering Process

We conducted some keyword-based batch deletions[2], primarily targeting medical articles where the term ‘volunteer’ was used in the context of trial participants; this removed 10,849 irrelevant papers. A further manual inspection removed another 5,191 duplicates and non-English articles, leaving a database of 14,194 papers to be considered by the review team. Full details of the literature downloaded are contained in Appendix Three.

Examination of the paper titles by the review team identified 2,204 papers for abstract review. These papers were then scored on the basis of their title and abstract, identifying 735 potential papers for full text review. We used a priority scoring system, detailed in Appendix Three, to identify 130 papers to be read by the reviewers; 17 of these were removed as not relevant following the full text review.
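The counts at the successive filtering stages reconcile as follows; this is simply an arithmetic check of the figures reported above, with stage names of our own choosing:

```python
# Reconciling the filtering counts reported in the text (stage names are illustrative).
returned = 37_031            # papers returned by the database searches
after_dedup = 30_234         # working database after duplicate removal
batch_deleted = 10_849       # keyword-based batch deletions (e.g. trial 'volunteers')
manual_removed = 5_191       # manual removal of duplicates and non-English articles

for_review = after_dedup - batch_deleted - manual_removed
print(for_review)            # 14194 papers passed to the review team

read = 130                   # papers prioritised for full reading
not_relevant = 17            # removed following full text review
print(read - not_relevant)   # 113 papers retained
```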

The papers given a full text review were also assessed for their quality as evidence, taking into account their sample, research design, methods and presentation of results. We have summarised the quality of evidence for each topic grouping using a traffic light system: green indicates that more than 30% of papers reviewed were assessed as high quality; amber indicates that most papers were of medium quality; and red indicates that more than 30% of the papers read were of low quality. This provides some indication of where the evidence is strongest, and where there are still gaps in our knowledge about volunteering.
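The traffic light rule can be sketched as follows. This is a minimal illustration of the thresholds described above, not the review team's own tooling; the function name and the order in which the green and red thresholds are checked are our assumptions:

```python
from collections import Counter

def traffic_light(ratings):
    """Assign a traffic-light rating to a topic grouping's papers.

    Implements the thresholds described in the text: green if more than
    30% of reviewed papers were rated high quality, red if more than 30%
    were rated low quality, amber otherwise (most papers medium quality).
    The precedence of green over red is an assumption for illustration.
    """
    counts = Counter(ratings)
    n = len(ratings)
    if counts["high"] / n > 0.30:
        return "green"
    if counts["low"] / n > 0.30:
        return "red"
    return "amber"

print(traffic_light(["high", "high", "medium", "low"]))      # -> green
print(traffic_light(["medium", "medium", "medium", "low"]))  # -> amber
```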


