At the start of the consultation analysis, the responses extracted from the Citizen Space portal and the responses provided by email that mirrored the format of the consultation questionnaire were merged into a single dataset using Python. All responses were treated equally regardless of how they were submitted. During the manual review, the research team screened for responses that were clearly intended to be offensive, abusive or explicitly vulgar; no responses were removed as a result of this screening.
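The report does not describe the merging step in detail; a minimal sketch of how portal and email responses might be combined with pandas is shown below. The column names (`respondent_id`, `q1`) and the `channel` field are illustrative assumptions, not the real schema.

```python
import pandas as pd

# Illustrative stand-ins for the real Citizen Space export and the
# email-derived responses; field names are assumptions.
portal = pd.DataFrame({"respondent_id": [1, 2], "q1": ["Agree", "Disagree"]})
email = pd.DataFrame({"respondent_id": [3], "q1": ["Agree"]})

# Tag each response with its submission channel, then concatenate into
# a single dataset so every response is analysed identically.
portal["channel"] = "portal"
email["channel"] = "email"
combined = pd.concat([portal, email], ignore_index=True)
```

Keeping the channel as a column (rather than analysing the two sources separately) is one way to honour the stated principle that all responses are treated equally however they were submitted.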
The consultation responses were also screened to identify duplicates and coordinated responses, whether organised campaigns run by external groups or coordinated submissions from individuals. This process identified 1,233 near-identical responses relating to a single organised campaign. To avoid the thematic analysis being dominated by the views put forward by one specific campaign, the close or exact duplicate responses associated with it have been summarised in a standalone section of the report and are counted only once within the rankings of themes presented in the question-by-question summary.
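The report does not specify how near-identical responses were detected. One simple approach, sketched here as an assumption rather than the team's actual method, is to normalise each response (lowercase, strip punctuation) and group exact matches on the normalised text, so that trivially edited copies of a campaign template fall into the same group. The sample responses are invented.

```python
import re
from collections import Counter

def normalise(text: str) -> str:
    # Lowercase and drop punctuation so lightly edited copies of a
    # campaign template reduce to the same key.
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

responses = [
    "Please REJECT recommendation 4.",   # campaign template
    "please reject recommendation 4",    # near-identical copy
    "I broadly support the proposals.",  # independent response
]

groups = Counter(normalise(r) for r in responses)
# Keys shared by many responses indicate a coordinated campaign; each
# such group would be counted once in the theme rankings.
campaign_keys = [key for key, count in groups.items() if count > 1]
```

In practice, fuzzier matching (e.g. similarity scoring) would be needed to catch copies with more substantial edits; this sketch only covers the exact-after-normalisation case.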
The consultation also received responses by email or post which did not follow the prescribed question format and did not always refer to specific recommendations. Because some of these responses could not be directly mapped to specific consultation questions or recommendations, the insights they raised have been summarised and reported separately in a sub-chapter of this report. Due to the non-standard nature of these responses, some may not be accurately reflected in the breakdowns of the totals in the quantitative analysis. Nonetheless, the themes raised in the non-standard responses are reflected in the executive summary and summary of overarching themes.
Approach to analysis of open-format questions
The consultation included 58 open-format questions with free-text fields, and there was no limit to the amount of text which respondents could write in their answers. All responses to the open-text questions were read in full by our team of researchers, who conducted a thematic analysis of each response to capture the main opinions expressed in overarching themes and to understand the reasoning behind answers. All identified themes were recorded in a comprehensive Excel-based codebook, with regular project team meetings used to ensure themes were defined consistently across researchers. This codebook was then used to identify the most frequent themes raised for each question, and the most prevalent themes are summarised in this report.
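The frequency-ranking step described above can be sketched in a few lines. The coded data here is invented: in reality the (question, theme) pairs would come from the researchers' Excel codebook, and the theme labels are purely illustrative.

```python
from collections import Counter

# Hypothetical coding output: one (question, theme) pair per theme
# identified in a response. Labels are illustrative assumptions.
coded = [
    ("Q1", "funding concerns"),
    ("Q1", "funding concerns"),
    ("Q1", "support for proposal"),
    ("Q2", "implementation timescale"),
]

# Tally theme frequencies within each question so the most prevalent
# themes per question can be reported.
by_question: dict[str, Counter] = {}
for question, theme in coded:
    by_question.setdefault(question, Counter())[theme] += 1

top_q1 = by_question["Q1"].most_common(1)[0]  # ("funding concerns", 2)
```

`Counter.most_common(n)` returns the `n` highest-frequency themes, which maps directly onto the report's per-question rankings of the most prevalent themes.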
Responses to the consultation differed in depth and approach, and while many responses included evidence to back up opinions, other responses primarily expressed preferences, concerns or expectations without further analysis. Our approach to handling these differences involved:
- Capturing the main idea regardless of whether it was expressed as a personal view or supported by evidence.
- Including every response in the analysis, reading beyond grammar or spelling mistakes and capturing the main idea regardless of difficulty in distilling the information.
Supplementary quotes from respondents have been used in the report to support many of the highlighted themes and views raised in response to the questions in this consultation. The quotes used are generally intended to be representative of themes or views raised by multiple respondents, unless otherwise stated. Quotes have only been attributed to specific segments where respondents have given express permission to publicly share the quote.