Building trust in the digital era: achieving Scotland's aspirations as an ethical digital nation: case study supplement

This paper is a supplement to the ‘Building Trust in the Digital Era: Achieving Scotland’s Aspirations as an Ethical Digital Nation’ Report. The case studies have fed into the core report content, helping to situate the ethical challenges raised by digital innovation across a range of sectors.


Harm Protection when Online

Case Study: Elections and Social Media – Prof. Shannon Vallor

Online behaviours aimed at influencing political opinions and voter choices represent a substantial portion of social media activity globally and in Scotland. Much of this activity is authentic and ethically benign, even desirable. Social media lower many traditional barriers to political engagement. For those with a smartphone, tablet or computer, the services are free and easy to use. They do not require travel outside the home, or formal affiliation with a party or other political organisation.

However, online social media are widely recognised as contributing to a number of democratic ills, most notably: misinformation (false or misleading information shared unwittingly); disinformation (false or misleading information shared with the intent to deceive); manipulation (targeting the emotional or psychological vulnerabilities of others in order to undermine their capacity for reasoned political choice); and inauthentic political behaviour (political activity that misrepresents the intentions, identity or nature of its author or authors).

Of course, misinformation, disinformation, manipulation and inauthentic political behaviour are nothing new; each has been a part of political life since politics began. However, their manifestations on social media pose unique risks to the health of Scotland’s political community, not only due to the unprecedented speed and scale of their influence, but also due to the potential to leverage new forms of data and increasingly sophisticated algorithmic techniques to coordinate their impact, disguise their origin, amplify their negative effects, and make them harder for authentic political actors to mitigate or resist.

This case study examines the phenomenon of inauthentic online behaviours designed to influence political opinions and activity in Scotland. Inauthentic online behaviours cluster into several types (Miller et al. 2017), including sock puppets (individual accounts that project a false identity), astroturfing (concealing the sponsorship or other interests behind a message, usually to project an illusion of ‘grassroots’ support and origin), and spambots (automated systems for generating, linking, boosting and coordinating fake accounts and their content).
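To make the ‘spambot’ category concrete, the sketch below shows, in Python, the kind of crude rule-based scoring a naive bot-detection tool might apply to a single account. It is illustrative only: the field names and thresholds are invented for this example, and genuine detectors (such as the academic tool Botometer) combine hundreds of learned signals rather than a handful of hand-picked rules.

    from dataclasses import dataclass

    @dataclass
    class Account:
        name: str
        age_days: int          # days since the account was created
        posts_per_day: float   # average posting rate
        retweet_ratio: float   # fraction of posts that are retweets (0.0-1.0)
        followers: int
        following: int

    def bot_score(a: Account) -> float:
        """Return a crude 0-1 score; higher means more bot-like.

        All thresholds here are invented for illustration.
        """
        score = 0.0
        if a.age_days < 30:
            score += 0.25      # very new account: a weak signal on its own
        if a.posts_per_day > 50:
            score += 0.25      # superhuman posting rate
        if a.retweet_ratio > 0.9:
            score += 0.25      # almost pure amplification, no original content
        if a.following > 0 and a.followers / a.following < 0.01:
            score += 0.25      # follows many accounts, followed by almost none
        return score

    suspect = Account("acct_123", age_days=5, posts_per_day=120,
                      retweet_ratio=0.97, followers=3, following=1800)
    print(f"{suspect.name}: bot score {bot_score(suspect):.2f}")  # 1.00

No single signal is decisive – a legitimate account can trip any one of these rules – which is why the harder detection problem, discussed below, concerns coordination across accounts rather than the behaviour of any single account.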

Of particular interest is the problem of coordinated inauthentic behaviours in political contexts, as these enable harms of greater scale and complexity. The term ‘coordinated inauthentic behaviour’ (CIB) has its origins in Facebook’s own efforts to define activity on their platform that violates their policies not because of false or dangerous content, but because the activity involves agents “working together to mislead others about who they are, or what they are doing.” (Gleicher 2018).

For example, imagine a network of twenty Facebook pages representing themselves as separate, locally grown environmental groups based around the United Kingdom. The individual pages seem benign, but they often share contradictory opinions and data, or directly challenge views expressed on the other pages. Now imagine that this lack of consensus is no accident, as they are all produced by a single group working in a foreign country or for an oil corporation, aiming to sow divisions among the UK environmental activist community and drive negative perceptions of environmental activism among the wider voting public. That is coordinated inauthentic behaviour, and it can be far more politically damaging than single acts of inauthenticity (for instance, a lone teenager pretending to be an MP). Let us go beyond hypotheticals. What kind of coordinated online inauthentic behaviour has been documented in the Scottish political ecosystem, and what harms may follow from such activity?

Between 2018 and 2020, Facebook removed hundreds of accounts linked to the Islamic Republic of Iran Broadcasting Corporation, which were associated with suspicious online activity in numerous countries including the United Kingdom. Pages removed included Free Scotland 2014 and The British Left (Scotsman, 23 Aug 2018); both posted about the 2014 Scottish independence referendum. These efforts followed the Russian foreign interference campaign associated with the Brexit referendum in 2016. In August 2018, unverified reports and opinion pieces in The Herald (Leask 2018 and Jones 2018) alleged that local Scottish activists may have used ‘retweet bots’ – spambots that use automated scripts to seek out posts to retweet – to boost the hashtag #dissolvetheunion, and to attack pro-independence Scottish women. Later that year, a report commissioned by MEP Alyn Smith (Patrick 2018) confirmed that Scots were a target for malign bots controlled by state and non-state actors, with between 4% and 12% of Scottish Twitter activity determined to be “potentially malign.” Along with the report, a website (scotorbot.scot, currently inactive) was launched to connect people with free ‘bot detection’ tools. In 2020, The Times reported that “SNP cybersecurity experts have detected a rise in divisive social media posts” linked to accounts in the United States, “particularly in relation to transgender rights” (McLaughlin and Andrews 2020).

What ethical issues does inauthentic online behaviour present, and what are its implications for Scottish democratic health? There are a number of important ethical issues to consider. One question is how to distinguish inauthentic activity by foreign actors purporting to be local from inauthentic activity originating in Scotland. The former is clearly misleading, but the latter can be as well. For example, using coordinated spambots to automate the boosting of a hashtag creates the false impression that the hashtag is trending because it enjoys the spontaneous and widespread support of many different individuals. So, is the use of automated tools by Scottish citizens to boost the apparent popularity of political opinions within Scotland inherently unethical, or is this merely a new political technique that should be accepted as ‘fair play’? Does it differ ethically from pre-digital modes of cultivating wider reach of political expression, such as anonymous leaflets and signage?
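The false impression of spontaneous support described above is, in principle, detectable, because accounts acting in lockstep betray themselves through timing. The sketch below illustrates one minimal coordination signal: pairs of accounts that repeatedly post the same hashtag within seconds of one another. The data, time window and threshold are invented for illustration; real analyses combine many such signals (shared content, shared infrastructure, synchronised account creation) before drawing any conclusion.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical (account, unix_timestamp) records of posts using one hashtag.
    # In practice these would come from a platform API or a research dataset.
    posts = [
        ("acct_a", 1000), ("acct_b", 1002), ("acct_c", 1003),
        ("acct_a", 2000), ("acct_b", 2001), ("acct_c", 2002),
        ("acct_d", 1500),   # an ordinary user posting on their own schedule
    ]

    def synchronised_pairs(posts, window=5, min_hits=2):
        """Count how often each pair of accounts posts within `window` seconds.

        Pairs that repeatedly post in near-lockstep are candidates for
        coordinated amplification. Thresholds are illustrative only.
        """
        hits = defaultdict(int)
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                hits[tuple(sorted((a1, a2)))] += 1
        return {pair: n for pair, n in hits.items() if n >= min_hits}

    print(synchronised_pairs(posts))
    # {('acct_a', 'acct_b'): 2, ('acct_a', 'acct_c'): 2, ('acct_b', 'acct_c'): 2}

Note that acct_d, posting on its own schedule, is never flagged, while the three lockstep accounts are. Whether such lockstep behaviour is unethical when its operators are ordinary Scottish citizens rather than foreign agents is precisely the question raised above.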

Another question concerns the extent of platforms’ ethical responsibilities to detect and suppress inauthentic political activity. Currently, such efforts remove only a minority of the inauthentic accounts active on any given platform. Yet platforms arguably have an even greater ethical duty to prevent this type of online harm, because inauthentic activity is often virtually impossible for the average user to detect on their own. Is it then incumbent upon social media platforms to disable some of the tools and design features that enable coordinated inauthentic activity, even if doing so comes at a cost to their business? Should platforms be responsible for pouring more of their profits into detection and suppression measures?

Alternatively, might some ‘inauthentic’ online behaviours be legitimate, if they are used by marginalised groups to boost their voices and at last gain a fair hearing in democratic politics? Is this possible without inauthentic modes of digital amplification like spambots? Is there a moral difference between artificially amplifying the message of marginalised or traditionally suppressed voices, and artificially amplifying other types of voices and opinions?

One might ask why inauthentic online activity is a serious problem for democratic health at all, given that deception and obfuscation have always been part of the political landscape. One reason is that inauthentic activity seeks to exploit cognitive biases that are antithetical to effective reasoning and deliberation – such as our tendency to be irrationally influenced by how many times we have heard an idea, how recently we have heard it, or how close to us in our social circle it appears to originate. When we cannot reason effectively, we cannot self-govern effectively, nor can we deliberate effectively together with our civic fellows. Thus the exploitation of these biases, at online scales and speeds not previously accessible to political manipulators, not only strikes at the weakest point of any democracy; it does so with far greater force than we are used to.

So how can the Scottish Government balance the goods of open political discourse and free expression with the need for a political sphere that reflects, rather than distorts, the genuine views of Scottish publics? What can Scotland learn from other countries facing the same challenge, and what can Scotland do to show ethical leadership in this regard?

Recommendations

Here are three possible actions Scotland might take:

  • Increase political pressure on social media platforms to invest far more resources in research on inauthentic online political behaviour and, most importantly, to make the results of that research open and accessible to governments and citizens everywhere.
  • Encourage Scottish news media to train reporters to use bot-detection and other tools to investigate online phenomena that may be trending or becoming highly visible for inauthentic reasons, and put editorial policies in place to discourage uncritically amplifying or legitimising online inauthentic activity in news media outlets.
  • Establish an independent, non-partisan Scottish research body devoted to detecting, tracking, studying and publishing emerging patterns of online inauthentic political behaviour in Scotland.

The future of democratic publics worldwide, and here in Scotland, is increasingly tied to the social media ecosystem. These are just a few ways we might ensure that this ecosystem enables authentic political flourishing.

Contact

Email: digitalethics@gov.scot
