Building trust in the digital era: achieving Scotland's aspirations as an ethical digital nation

An expert group, supported by public and stakeholder insights, has reviewed evidence and provided recommendations that will support and inform future policy. The report focuses on building trust with the people of Scotland by engaging them in digital decisions that affect their lives.


Algorithmic Decision Making

Objects of Trust

Technology: Is it reliable? Is it robust? Is it safe?

Fairness: Could it reinforce discriminatory or unfair practices?

Transparency: Are the people behind it being truthful?

What is Algorithmic Decision Making?

To achieve Scotland’s vision of becoming an Ethical Digital Nation, it is important that any algorithmic decision-making be supported by reliable, fair and representative data and technologies. This means that an appropriate level of transparency and scrutiny is available for any technology-supported processes or practices used to make decisions about citizens’ lives. Algorithms that impact individuals should, to a certain extent, be made transparent and accessible to help build trust in how they come to their conclusions.

Algorithmic decision-making often relies on large amounts of data. Through standardised rules, it uses this data to derive useful information and to infer correlations based on historic patterns and behaviours, which then support decision-making. The complex nature of the computational techniques involved can make the resulting decisions difficult to explain. This type of decision-making happens across a wide variety of activities – from credit card approvals to targeted advertising to automated interviewing in recruitment processes.
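
To make the idea concrete, the following minimal sketch (in Python) shows how a decision system might combine a hard-coded rule with a correlation inferred from historic records. The applicant fields, thresholds and historic data are illustrative assumptions only and do not describe any real scoring system.

```python
# Minimal, hypothetical sketch of algorithmic decision-making:
# standardised rules plus a correlation inferred from historic outcomes.
# All fields, figures and thresholds are invented for illustration.

# Historic records: (income_band, missed_payments, was_repaid)
HISTORY = [
    ("low", 2, False), ("low", 0, True), ("medium", 1, True),
    ("medium", 3, False), ("high", 0, True), ("high", 1, True),
]

def repayment_rate(income_band: str) -> float:
    """Infer a correlation from historic patterns: the share of past
    applicants in this income band who repaid."""
    matching = [repaid for band, _, repaid in HISTORY if band == income_band]
    return sum(matching) / len(matching) if matching else 0.0

def decide(income_band: str, missed_payments: int) -> str:
    """Apply a standardised rule plus the inferred correlation to
    support a decision about a new applicant."""
    if missed_payments > 2:                    # hard rule
        return "refer to human reviewer"
    if repayment_rate(income_band) >= 0.5:     # pattern learned from history
        return "approve"
    return "decline"

print(decide("medium", 1))  # approve
print(decide("low", 3))     # refer to human reviewer
```

Even in this toy example, the outcome for a new applicant is driven entirely by what happened to past applicants who resembled them, which is where questions of explicability and fairness arise.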

These decisions can have real consequences for how people live their lives. To make sure that the decisions algorithms make are trustworthy and fair, oversight and transparency are needed so that the public can feel confident in the legitimacy of these decisions.

Why are Reliable, Representative Data and Technologies Underpinning Algorithmic Decision Making Important?

Algorithms can be beneficial in making processes more streamlined, efficient and fairer, from the perspective of both the business and the user. However, as long as algorithms are used to determine outcomes for citizens, there will be a need to ensure that accountability for the design and use of an algorithm is clearly established, particularly in the event that it is suspected to be inaccurate or unfair.

Modern algorithmic systems are never neutral: they capture the goals, preferences and biases present in the input data and in the design of the model itself. Unless care is taken, algorithmic systems built on data about past behaviour reflect both the wanted and unwanted biases present in society, and can perpetuate existing societal biases – for example along the lines of gender, ethnicity, age or even the area where someone lives. Reinforcing existing societal biases can mean that already marginalised groups are further discriminated against and continue to miss out on support and opportunities.
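
The sketch below, using deliberately skewed synthetic data, illustrates how a system that learns from past decisions reproduces the disparity contained in them. The group labels and figures are invented for the example and describe no real dataset.

```python
# Illustrative-only sketch: synthetic, deliberately biased historic data
# showing how learning from past decisions perpetuates the original bias.
from collections import defaultdict

# Past hiring decisions that already disadvantage group "B".
PAST_DECISIONS = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

def learn_rates(decisions):
    """'Learn' per-group acceptance rates from historic outcomes."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, was_accepted in decisions:
        totals[group] += 1
        accepted[group] += was_accepted
    return {g: accepted[g] / totals[g] for g in totals}

rates = learn_rates(PAST_DECISIONS)
print(rates)  # {'A': 0.8, 'B': 0.3} – the historic disparity

# A naive model that scores candidates by their group's historic rate
# simply carries that disparity forward into future decisions.
def score(group: str) -> float:
    return rates[group]

print(score("A") > score("B"))  # True: group B remains disadvantaged
```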

Ensuring that algorithmic decision-making is underpinned by reliable, representative data and technologies is a fundamental component of being an Ethical Digital Nation. For this type of decision-making to be fair and inclusive, there is a need to reduce the risk of discrimination against individuals and groups based on unreliable or inaccurate data and technology, and to ensure greater transparency about where the data used to determine outcomes is drawn from. Under Scotland’s AI Strategy[12], a Scottish AI Playbook is being developed that will serve as an open and practical guide to how Scotland can be trustworthy, ethical and inclusive in its use of algorithms across various scenarios.

“They don’t seem to be sophisticated enough, humans are much more complex and nuanced than data and machines can ever be.”

National Digital Ethics Public Panel Insight Report, 2021, P. 47

Case Study:

Gambling

Dr. Raffaello Rossi & Prof. Agnes Nairn

Social media advertising spend is increasing rapidly and is the basis for many modern advertising campaigns. As early as 2018, the gambling industry invested a massive £149m in social media marketing – a figure that has likely increased substantially in the past three years.

The increasing use of social media (gambling) advertising, however, raises three general concerns. First and foremost, most social media platforms have relatively young user bases. On Twitter, for example, the largest demographic group is users aged 18-34 (51.8% of all users). On Snapchat, 82% of users are aged 34 or younger, and on TikTok 60% are aged 9-24. Any advertising posted on these platforms is therefore likely to disproportionately affect children and young people.

Second, the cascade of social media advertising – which is considerably cheaper to launch and thus results in more adverts per pound – raises substantial challenges for regulators because of its sheer volume. Even the CEO of the UK Advertising Standards Authority publicly admitted during a House of Lords Committee Inquiry that methodological challenges make it highly complex for his organisation to identify whether advertisers are targeting specific (vulnerable) groups or, indeed, even to know the volume of advertising to which these groups are exposed online. The combination of regulators being unable to uncover irresponsible social media advertising activity and the methodological challenges of analysing this massive amount of data could create a “dark space” in which advertising rules are not obeyed, no one is able to monitor this, and therefore no one is able to regulate or inform policy thinking (Rossi et al., 2021).

Finally, and related to the previous point, current UK advertising regulations are outdated. The Advertising Standards Authority (ASA) argues that UK advertising is well regulated and under control, but the stipulation that rules “apply equally to online as to offline advertising” makes little sense given the 'social' characteristics and possibilities of social media that simply do not apply to traditional media. For example, the 'snowballing' effect created when users follow and engage with social media posts from companies’ accounts only exists on social media. Through snowballing, the sender of the post (e.g. the company account) has no control over who will end up seeing it – which means it might inadvertently expose children to harmful adverts. This powerful mechanism is currently completely unregulated.

A new but rapidly growing social media advertising technique called content marketing (sometimes also called 'native advertising') raises serious issues in relation to children. Such adverts try to bypass the protective heuristics that warn users internally: be careful, this is an advert. Instead, they are designed to create a warm, fuzzy feeling or to make their audience giggle. As social media users who see such a funny post like, comment on and share it, it gains momentum and might go viral. We know from previous research that children are more affective (Pechmann et al., 2005) and do not have the same advertising recognition skills as adults (Wilcox et al., 2005). With this new form of advertising, however, it is nearly impossible for children to immediately recognise a post’s persuasive intent – breaching a fundamental marketing pillar: “Marketing communications should be clearly distinguishable as such, whatever their form and whatever the medium used.” (International Chamber of Commerce (ICC)).

Although content marketing poses a real danger of luring children into addictive behaviour, it is almost completely unregulated. Current regulations set by the Committee of Advertising Practice (CAP) prohibit, for example, adverts for gambling or HFSS products from targeting or appealing to children. However, such codes do not apply to content marketing, as it is not considered advertising by the regulator (see CAP, 2020). Indeed, advertisers in the UK can currently do anything they like within content marketing posts. An alcohol brand account could post content marketing ads that include children, and a gambling brand could post content marketing ads that are obviously targeted at children. Both, of course, would be strictly prohibited for 'normal' (i.e. non-content-marketing) advertising.

In our research we found that, of 888,745 UK gambling adverts on Twitter, around 40% were classified as content marketing (Rossi et al., 2021). In a subsequent study, we found that these content marketing adverts were almost four times more appealing to children and young persons (aged 11-24) than to adults: 11 out of 12 gambling content marketing ads triggered positive emotions in children and young persons, compared with only 7 for adults (Rossi & Nairn, 2021).

Introduction to a Case Study: Video Games in Scotland: Risks, Opportunities and Myths – Dr. Matthew Barr

Video games play a role in a significant number of people’s lives across Scotland, with UK-wide data suggesting that 86% of people aged 16-69 have played computer or mobile games in the last year (UKIE, 2020). Scotland is also a significant producer of video games, with games companies including Rockstar, Outplay, Blazing Griffin, Ninja Kiwi, No Code, Stormcloud, and many more developing games here. As both producers and consumers of video games, it is imperative that we understand the ethical and social implications associated with playing them. However, our understanding of the issues is muddied by a mixture of bad science, anecdotal reports, and ill-informed media coverage. This case example provides a balanced, evidence-based overview of the science behind games’ potential impact on player well-being. As such, it looks at the relationships between video games and mental health, video games and violence, and online games and gambling. In particular, the largely positive impact of video games on players’ well-being during the COVID-19 pandemic is examined, with reference to peer-reviewed research carried out by the University of Glasgow. Meanwhile, the presence of gambling in online video games – in the form of so-called ‘loot boxes’ – is explored, with expert legal commentary from a Scottish solicitor. Click here to read the full case study.

How Can We Hold Algorithms Accountable?

As long as algorithms are being used to determine outcomes for citizens, there will be a need to ensure that accountability for the algorithm is clearly established in case its decisions prove inaccurate or unfair. If an algorithm is determining whether someone can access education, buy a house or get a job, that person needs to feel confident that the outcome is justified. It is not just the outputs that need to be considered, but also how the algorithms are deployed. They should always be applied fairly and equitably, which is best done through human oversight and appropriate, potentially participatory, governance.

There should always be a way to determine how a decision was made. However, total transparency may not always be appropriate or sufficient. Some companies may want to keep their algorithms secret to protect a competitive advantage, or the data used may be sensitive personal information that warrants a high level of protection.

Promoting better regulatory practices around transparency will push companies to provide more accessible explanations of how their algorithms work. Similarly, external audits could provide an extra layer of scrutiny around algorithmic decision-making and reassure the public that certain standards are being met. One of the most important considerations is including a ‘human-in-the-loop’: a human ‘sense check’ that helps to validate the decisions the algorithm is making. When implemented with appropriately informed, skilled and empowered human oversight, ‘human-in-the-loop’ protocols can help ensure that situational judgement and nuance are included in the final decision.
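
As a rough illustration of what a ‘human-in-the-loop’ check might look like in practice, the sketch below routes low-confidence or adverse outcomes to a human review queue. The thresholds, decision labels and queue structure are assumptions made for illustration, not a prescribed design.

```python
# Hedged sketch of a 'human-in-the-loop' sense check: confident, favourable
# outcomes are automated, everything else is held for human review.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str       # e.g. "approve" or "decline"
    confidence: float  # the system's confidence in the outcome, 0.0-1.0

REVIEW_QUEUE: list[Decision] = []

def apply_with_oversight(decision: Decision, threshold: float = 0.9) -> str:
    """Automate only confident, favourable outcomes; hold the rest
    for an informed and empowered human reviewer."""
    if decision.outcome == "decline" or decision.confidence < threshold:
        REVIEW_QUEUE.append(decision)   # human sense check required
        return "pending human review"
    return decision.outcome             # safe to automate

print(apply_with_oversight(Decision("c-101", "approve", 0.97)))  # approve
print(apply_with_oversight(Decision("c-102", "decline", 0.95)))  # pending human review
```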

Contact

Email: digitalethics@gov.scot
