Delivering Scotland's circular economy: Proposed Circular Economy Bill - Consultation analysis

Report of the analysis of responses to the consultation on proposed provisions for a Circular Economy Bill.


3. Approach to the Analysis

3.1 Methodology

3.1.1 Exporting the data

Ricardo was granted access to the consultation's activity dashboard on Citizen Space. The “approved responses with redaction” were downloaded at least weekly as a Microsoft Excel spreadsheet and saved to a secure folder. Ricardo then imported the exported data into their in-house analysis tool, itself built in Microsoft Excel (see Section 3.1.3 for further details on this tool).

3.1.2 “Cleaning” of the data

Once the data was imported into the tool, Ricardo “cleaned” it by manually identifying blank responses and duplicate responses from the same individual or organisation, using features such as Microsoft Excel’s data filter. Blank responses were excluded from any further analysis. Duplicate responses were combined by copying and pasting the different answers into a single answer; the duplicated lines were then also excluded from further analysis. None of the excluded responses were deleted from the tool; instead, a comment was recorded in a dedicated column. This ensured that all responses (including excluded ones) were retained and that changes to the database could be audited and quality controlled.
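To illustrate the flagging logic described above (the actual cleaning was carried out manually in the Excel-based tool), a minimal pandas sketch might look as follows; the respondent names, question columns and comment wording are all hypothetical.

```python
import pandas as pd

# Hypothetical extract of the response database; column names are assumed.
df = pd.DataFrame({
    "Respondent": ["Org A", "Org A", "Individual B", "Individual C"],
    "Q1": ["Answer to Q1", None, None, "Answer to Q1"],
    "Q2": [None, "Answer to Q2", None, None],
})

answer_cols = ["Q1", "Q2"]
df["Comment"] = ""

# Blank responses are flagged, never deleted, so the audit trail is preserved.
df.loc[df[answer_cols].isna().all(axis=1), "Comment"] = "Excluded: blank response"

# Later submissions from the same respondent are flagged as duplicates; in
# practice the answers were first merged manually into the retained entry.
dupes = df.duplicated(subset=["Respondent"], keep="first") & df["Comment"].eq("")
df.loc[dupes, "Comment"] = "Excluded: duplicate (merged into first entry)"
print(df)
```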

Ricardo also re-assigned responses that had been entered against the wrong question to a better suited question and, where feasible, re-assigned the content of emails to the relevant questions. A comment was added in the relevant column to record any changes made to the responses.

At this stage of the process, Ricardo also identified the campaign responses, determining whether each was standard or non-standard, as well as the other responses received by email. These were marked with a comment in a dedicated column of the tool. Further details on the management of campaign responses and other email responses are provided in Section 3.1.4.

3.1.3 Undertaking the qualitative analysis

Ricardo used their in-house qualitative survey analysis tool to perform the qualitative analysis of the open questions. This user-friendly Microsoft Excel spreadsheet allowed users to manually extract the key concepts from the qualitative narrative and to record a sentiment indicating whether the content was broadly positive or negative.

Key concepts

To extract the key concepts from the qualitative narrative, team members reviewed each entry of the database manually, reading the answers one by one and identifying a maximum of three key concepts (a key concept being a word or a short statement), taking care to preserve the exact wording of the respondents.

All key concepts were then collated manually from the tool, and an external word count tool was used to identify key cross-cutting themes for each open question. Key concepts and the results of the word count are summarised in Section 4. Direct quotes from entries marked as not for publication were not included.
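The word count step could, for example, work along the lines of the following sketch; the key concepts and stopword list shown are invented for illustration, and the report does not specify which external tool was used.

```python
import re
from collections import Counter

# Hypothetical key concepts, captured verbatim from respondents' answers.
key_concepts = [
    "charges for single-use items",
    "reuse and repair",
    "charges on producers",
    "household recycling and reuse",
]

# Count individual words (minus trivial stopwords) to surface cross-cutting themes.
stopwords = {"and", "for", "on", "the", "of"}
words = [w for concept in key_concepts
         for w in re.findall(r"[a-z][a-z-]*", concept.lower())
         if w not in stopwords]
print(Counter(words).most_common(5))
```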

Sentiment

Team members assessed sentiment manually, choosing the level of sentiment from a drop-down menu. A score was automatically allocated to each level of sentiment to allow for a detailed analysis of the sentiment of each proposal and theme by stakeholder group (individuals and organisations). Please note that some of the open questions were not suited to sentiment analysis, and Ricardo was therefore not able to provide a sentiment for every open question in this consultation. The list of questions for which a sentiment was not provided is available in Appendix 3 – List of questions excluded from sentiment analysis.

Please note that answers that did not add any information were not given a sentiment, as this would have skewed the overall sentiment rating. Similarly, where answers indicated that respondents were too unfamiliar with the topic to comment, sentiment was not assessed. However, where comments indicated that respondents did not understand the proposal, these were included to show where additional clarity may be needed. Examples of answers excluded from sentiment scoring include ‘no comment’, ‘nothing to add’, ‘no further view’ and ‘don’t know’. Answers such as ‘maybe’, ‘don’t understand’, ‘no, I support it’ and ‘not sure what this will mean’ were included.

Table 2 below lists the sentiment levels with their descriptions and associated scores.

Table 2: Sentiment description and scoring
Sentiment | Description | Score
+2 | Very positive (e.g. strong approval/agreement) | 5
+1 | Positive (e.g. approval/agreement with some negative factors to consider) | 4
0 | Neutral (e.g. neither approve/agree nor disagree/reject, or an equal balance of positive and negative factors to consider) | 3
-1 | Negative (e.g. disagreement/rejection with some positive factors to consider) | 2
-2 | Very negative (e.g. strong disagreement/rejection) | 1

Please note that closed questions were included in the sentiment analysis but did not need to be analysed further; they were scored automatically as per Table 3 below.

Table 3: Closed questions scoring
Answer | Score
Yes | 5
Neither agree nor disagree | 3
No | 1
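The scoring rules in Tables 2 and 3 amount to simple lookups, as the following sketch shows; the function name and the example answers beyond those quoted above are illustrative.

```python
SENTIMENT_SCORES = {"+2": 5, "+1": 4, "0": 3, "-1": 2, "-2": 1}       # Table 2
CLOSED_SCORES = {"Yes": 5, "Neither agree nor disagree": 3, "No": 1}  # Table 3
EXCLUDED = {"no comment", "nothing to add", "no further view", "don't know"}

def score_open_answer(answer, sentiment):
    """Return the score for an open answer, or None if it is excluded."""
    if answer.strip().lower() in EXCLUDED:
        return None  # uninformative answers would skew the overall rating
    return SENTIMENT_SCORES[sentiment]

print(score_open_answer("No comment", "0"))              # None: excluded
print(score_open_answer("Strongly support this", "+2"))  # 5
print(CLOSED_SCORES["Yes"])                              # 5
```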

Using Microsoft Excel formulas, statistics on each proposal were calculated automatically from the average sentiment score of the relevant questions for each entry in the database. Similarly, statistics on themes were calculated automatically from the average sentiment score of the proposals relevant to each theme for each entry in the database. The statistics provided in this report include the following:

  • The most common score.
  • The average score: overall, and for individuals and organisations separately.
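As a minimal sketch of how these statistics follow from the scores (the actual calculations used Microsoft Excel formulas), assuming a small hypothetical scored extract:

```python
import pandas as pd

# Hypothetical scored entries; "Group" separates individuals from organisations.
scores = pd.DataFrame({
    "Group": ["Individual", "Individual", "Organisation", "Organisation"],
    "Proposal 1": [5, 4, 2, 4],
})

col = scores["Proposal 1"]
print("Most common score:", col.mode().iloc[0])      # 4
print("Overall average:", col.mean())                # 3.75
print(scores.groupby("Group")["Proposal 1"].mean())  # average by respondent group
```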

3.1.4 Managing campaign responses and other responses received by email

Campaign responses

Rather than being submitted through Citizen Space, campaign responses were received by the Scottish Government by email and uploaded to a secure shared folder to which Ricardo was granted access. Given the time available for the analysis, it was not possible to review each email individually.

When examining the campaign responses received by the Scottish Government by email, Ricardo noticed that most of the emails were very similar in size (around 21KB) and that only a small number differed significantly. It was therefore agreed that the Scottish Government would review all emails larger than 22KB, together with a random selection of the remaining, standard-sized emails, to identify any non-standard responses. Of the emails checked, 89 differed from the standard text and were uploaded by the Scottish Government to Citizen Space. Ricardo reviewed these responses in more detail and found that only 14 added value to the consultation beyond the standard text; these were analysed separately.
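The size-based triage described above could be expressed as in the sketch below; the folder name, file extension and sample size are assumptions, as the report does not state how the random selection was drawn.

```python
import random
from pathlib import Path

THRESHOLD = 22 * 1024  # bytes; emails above this were reviewed in full
folder = Path("campaign_emails")  # assumed location of the shared folder

emails = list(folder.glob("*.eml"))  # assumed file extension
large = [e for e in emails if e.stat().st_size > THRESHOLD]
standard_sized = [e for e in emails if e.stat().st_size <= THRESHOLD]

# Review everything over the threshold plus a random sample of the rest.
sample_size = min(10, len(standard_sized))  # sample size is illustrative
to_review = large + random.sample(standard_sized, k=sample_size)
```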

To allow efficient management of the standard responses, the Scottish Government uploaded one standard campaign response to Citizen Space in PDF format, which Ricardo processed by re-assigning its content to the relevant questions where feasible. Ricardo then duplicated this response once for each standard campaign response received by the Scottish Government by email. The same content was used as the basis for the non-standard responses, to which the additional content was added as explained above. Following this, the qualitative analysis was performed as described in Section 3.1.3.
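A minimal sketch of the duplication step, assuming a pandas representation of the database and an invented count of standard campaign emails:

```python
import pandas as pd

# The processed standard response, with its content already re-assigned to
# the relevant questions (column names are hypothetical).
standard = pd.DataFrame([{"Respondent": "Campaign (standard)",
                          "Q1": "Standard campaign text"}])

n_standard = 500  # invented count of standard campaign emails received
campaign_rows = pd.concat([standard] * n_standard, ignore_index=True)
print(len(campaign_rows))  # 500
```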

Other responses received by email

The Scottish Government uploaded all other responses received by email to Citizen Space in PDF format, and Ricardo processed each one individually by re-assigning its content to the relevant questions where feasible. Following this, the qualitative analysis was performed as described in Section 3.1.3.

3.2 Limitations, Gaps and Assumptions

One of the main limitations of the approach is that the “cleaning” of the data was done manually, which carries a higher risk of human error. This was mitigated by the quality assurance process, during which the Project Manager oversaw the task and reviewed the final database.

Another main limitation is that each user of the in-house qualitative survey analysis tool had their own view of what the key concepts were and which level of sentiment to select, and that the analysis was done manually, allowing for potential error. To mitigate this, users were briefed thoroughly on the project and the tool, and were given written guidance on how to use the tool, details of each proposal and theme, and descriptions of what each level of sentiment referred to. In addition, the quality assurance process, in which the Project Manager reviewed the work completed by the users, ensured that potential errors were picked up and addressed.

Please note that the in-house qualitative survey analysis tool had already been reviewed using Ricardo’s automated in-house Microsoft Excel QA tool, which checks formulas and highlights any inconsistencies. The tool itself is therefore not expected to have introduced any limitation to the task.

Because the standard campaign responses could not be analysed individually within the timescale of this analysis, a complete analysis by respondent type (individuals or organisations) could not be provided.

The main assumption in this analysis relates to the way the non-standard campaign responses were identified. As explained above, it was assumed that emails received by the Scottish Government and identified as campaign responses were standard if their size was 22KB, and that all other emails were non-standard unless their review found otherwise. This means that campaign emails of 22KB were not reviewed but were classified directly as standard answers (unless their title differed from the standard title, in which case the answer was reviewed).

It should also be noted that each response to this consultation was given equal weight. This means, for example, that each standard campaign response was counted individually, while an organisation responding on behalf of its members was counted as one response even though it might represent multiple individuals.
