Building trust in the digital era: achieving Scotland's aspirations as an ethical digital nation

An expert group, supported by public and stakeholder insights, has reviewed evidence and provided recommendations that will support and inform future policy. The report focuses on building trust with the people of Scotland by engaging them in digital decisions that affect their lives.


Ethical Limits to Monitoring and Surveillance

Objects of Trust:

Technology: Is it reliable? Is it robust? Is it safe?

Privacy: Is my information confidential? Are there laws/regulations to protect me? Are they being enforced?

Transparency: Are the people behind it being truthful? Are there other motives?

What are Ethical Limits to Monitoring and Surveillance?

Monitoring involves either active or passive observation of an entity – whether a person, place or process. Surveillance can mean extended observation of a defined area or phenomenon but, more negatively, it can also mean targeted monitoring that specifically tracks the movements, actions and interactions of people. The most problematic form of surveillance is untargeted bulk surveillance that is not driven by a specific suspicion or based on a concrete risk.

Monitoring and surveillance technologies can provide benefits to both the individual and society. For example, CCTV footage can be used in court as evidence to prove someone was in a certain place, or GPS mapping and tracking tools can help with navigation. However, in recent years monitoring and surveillance technology has become more pervasive and invasive, often in ways that individuals are unaware of. We have seen a rise in the use of facial recognition technology and, particularly during the pandemic, workplace monitoring. This raises serious concerns not only about the accuracy of the technology, but also about its proportionate use. Furthermore, it can call into question trusted relationships between individuals and organisations, or employees and employers.

“It has both pros and cons. I don’t like the idea of someone knowing where I am, it takes away some privacy. However, on the other hand, it could prove useful when members of the public aren’t following rules.”

National Digital Ethics Public Panel Insight Report, 2021, P. 56

Why are Ethical Limits to Monitoring and Surveillance Important?

Monitoring and surveillance are already prominent and broadly accepted concepts in our society. However, as technology continues to provide new ways of embedding monitoring and surveillance into our daily interactions and activities, it can be more difficult to draw the line between what is acceptable and what is unacceptable. Digital surveillance includes physical monitoring and tracking, such as tracking cars and automated CCTV systems, as well as monitoring in the digital space, such as online shopping habits or parents monitoring their children’s phone usage.

New technologies have enabled more invasive surveillance in the workplace. We have seen examples of ‘micro-management’ through keystroke counting or screen-time monitoring, which has left people feeling demotivated and under pressure to be constantly online and working (National Digital Ethics Public Panel Insight Report, 2021). Collecting data in this way can often feel like monitoring without good reason, can suppress productivity and has the potential to breach privacy and data protection laws. This may erode trust in workplace relationships, where employees no longer feel that they have any autonomy over their own activities and workloads. If the data collected in these cases is not useful, and the technology is not being used in a targeted way, then it would seem that it is being deployed merely because it is there and available, rather than for a clear and proportionate reason. Limits, determined through public debate, on the use of monitoring and surveillance technology will help to ensure that its use is not intrusive, manipulative or unnecessary. Additionally, strengthening regulation, legislation and oversight are levers available to mitigate concerns on this evolving topic.

“Problems could arise when data is collected and used for reasons not laid out and that is where it becomes unacceptable”

National Digital Ethics Public Panel Insight Report, 2021, P. 56

Another example of digital monitoring and surveillance is the physical surveillance of public movements. Anonymised data can be collected at scale to help inform community-level strategies on issues such as crime prevention, public health or environmental planning and protection. However, this is often carried out in ways that are not transparent or publicly understood. For this type of activity to be deemed acceptable, it is important that organisations communicate clearly why there is a need to collect this data and how the technology will be used to achieve this.

Additionally, the practice of monitoring and collecting online data – dataveillance – such as monitoring credit card transactions, social networks, emails and so on, creates a digital footprint of an individual’s activity. While, again, there can be benefits to dataveillance, such as detecting fraud, there are a number of concerns around the practice too. People are often unaware that this type of surveillance is happening, highlighting a lack of transparency. Dataveillance can also compromise online anonymity.

Highly invasive and intrusive levels of surveillance cannot become the norm. There needs to be a balance – what can we deploy that is useful and proportionate and helps to keep society safe and flourishing; and how can we better limit, monitor and scrutinise surveillance technology and data collection so that it is used in ways that are fair, safe and transparent?

Privacy by Design and Default

High levels of surveillance and monitoring can be a violation of personal privacy. Privacy includes control over one’s personal data and the spaces we occupy (physical or digital), free from intrusion by public bodies, private organisations, employers and even within the family. Privacy is intimately linked to the concept of ‘dignity’. Some level of privacy protection is provided by law, but there are many exceptions, and in some areas, such as the right of teenagers to privacy from their parents, there is little social consensus. There is a clear need to make sure that a digitally inclusive society can be achieved without citizens needing to worry about their safety and privacy in digital or physical spaces. Greater transparency and awareness of the variety of methods of digital monitoring, and of how monitoring can be controlled, help to build public trust in the systems that people need to engage with.

Protection of individual privacy is a priority in the ethical deployment of monitoring and surveillance technology. Without citizens feeling confident that their data is safe, it will become increasingly difficult to maintain a sustainable relationship between public trust and the use of data-driven technologies.

To ensure that personal privacy is protected as far as possible, there is a need to establish ethical limits on the use of surveillance and monitoring technologies by:

  • Ensuring there is no unfair use of monitoring that would invade personal privacy, by only using surveillance technologies with compelling reason and oversight
  • Restricting the use of surveillance and monitoring technologies until they can be independently verified to meet high standards of effectiveness and accuracy, to be subject to proper governance, to address equality issues and to address authoritarian uses
  • Protecting, under any successor legislation, the following principles currently enshrined as legal rights under the GDPR, without which a significant degree of data collection and processing could not be carried out in an ethically sound manner:
    • Minimising the amount of information people are expected to provide when accessing goods and services
    • Building privacy by design into apps and services as standard
    • Providing the highest level of privacy settings as the default option
  • Providing an inclusive broadband and mobile infrastructure.
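The principles of privacy by design, privacy by default and data minimisation listed above can be made concrete in how a service models its user settings. The following is a minimal, illustrative sketch only – the class and field names are hypothetical, not drawn from any real system – showing the idea that every privacy-relevant option should start at its most protective value, with sharing requiring an explicit opt-in:

```python
# Hypothetical sketch of "privacy by design and default".
# All names here (PrivacySettings, UserAccount, the individual fields)
# are illustrative assumptions, not part of any real product or standard.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Privacy by default: the most protective value is the starting value;
    # the user must actively opt in to any form of sharing or tracking.
    profile_visible_to_public: bool = False
    location_sharing_enabled: bool = False
    analytics_opt_in: bool = False
    ad_personalisation_opt_in: bool = False

@dataclass
class UserAccount:
    # Data minimisation: only the data strictly needed to provide the
    # service is required; everything else is optional and absent by default.
    username: str
    optional_profile_fields: dict = field(default_factory=dict)
    privacy: PrivacySettings = field(default_factory=PrivacySettings)

# A newly created account starts with the highest level of privacy
# protection; nothing is shared until the user changes a setting.
account = UserAccount(username="example_user")
assert not account.privacy.profile_visible_to_public
assert not account.privacy.location_sharing_enabled
```

The design choice this sketches is that protective behaviour requires no action from the user, while less protective behaviour always does – the inverse of the opt-out patterns the report criticises.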

Case Study:

Cybersecurity

Dr. Markus Christen

Cybersecurity is a major area of growth and investment in Scotland, and the Scottish Government has been proudly promoting this sector as an enabler of prosperity and jobs.

The development of tools for strengthening personal and corporate privacy protection, building cyber-resilience against threats presented by criminal or state actors, supporting financial or supply chain accountability and helping to tackle serious crime may, on the one hand, be regarded as an ethical duty.

At the same time, the cybersecurity sector is also heavily investing in the development of surveillance and forensic tools for purposes such as law enforcement, border control, national security and behavioural monitoring, which can challenge public expectations for ethical, proportionate, transparent, fair, inclusive and accountable digital practice.

Investments in Scottish cybersecurity/forensics companies are partly based on the prospect of selling such technologies/services abroad. Some of these may be regarded as ethical exports, since they may help to guard vital public services or secure the assets and private information of citizens globally. Yet even the most well-intentioned technologies may be misused in the wrong hands; for example, there has been much coverage of Israel’s success in cybersecurity innovations, yet we are seeing evidence of these innovations being used for domestic, corporate and governmental spyware, including by authoritarian governments or geopolitical adversaries of the UK. Scotland can make the most of a cyber-Scotland, and avoid the potentially harmful effects of misuse and misappropriation, by following three layers of action:

Government and legal: obtain an overview of the often-fragmented legal landscape, including its gaps and conflicts, across the legislative areas of network and information security measures, electronic communications (including privacy and data protection issues) and cybercrime.

Guidelines and soft law: Legislation will not be able to cover every case or issue that emerges in real life. Companies therefore need to create a culture of awareness of such ethical and legal issues, including procedures for how to operate (and deliberate) where legal guidance is unclear. The process of generating guidelines within a company could itself be an instrument for driving such cultural change.

Training of professionals at all levels: It is well known that cybersecurity is a 'wicked problem' that cannot be solved, only managed. Knowledge of cybersecurity should therefore span a broad spectrum of competences (with a focus appropriate to each profession). What we consider relevant is that the ethical, legal and social aspects of cybersecurity should be part of the training of professionals.

Case Study:

Domestic Abuse and Data and Digital Technologies

Dr. Katherine O’Keefe

The use of digital technologies to facilitate domestic abuse mirrors many of the concerns about surveillance and technology revealed by the Public Panel. The increasing integration of connected digital devices into home life affects privacy generally, but it is of particular concern in the context of domestic abuse or intimate partner violence. The legal and ethical frameworks often used to raise concerns about the impact of digital technologies and surveillance on our rights to privacy and autonomy typically model threats and harms as external to the home, and seek to protect the home from government, industry or external criminal threats. Yet the same threats to privacy, dignity and autonomy can occur within the domestic space, in the context of intimate partner violence. This is reflected in the focus of legal protections: the UK Data Protection Acts and GDPR limit the scope of protections, exempting 'domestic' or 'household' use of personal data from requirements for compliance.

The impact of domestic abuse on Scottish life is wide-ranging and significant. According to research by the Scottish Government, 62,907 incidents of domestic abuse were recorded by the police in 2019/20,[15] and the Coronavirus crisis saw a 'shadow pandemic', with an increase in reported domestic violence as well as increased threats and pandemic-specific tactics of abuse during lockdowns.[16] “Some services observed increases in online stalking and harassment behaviours.”[17] According to Scottish Women’s Aid, “For women not living with their abuser, lockdown meant that their abuser knew they would be at home, increasing the abuser’s opportunities for stalking and continued harassment. The reliance on technology during lockdown to maintain social contact and for work also provided opportunities for abusers to misuse that technology to continue the abuse.” [18]

Many emerging digital devices and connected services have been weaponized by abusers as tools for surveillance or stalking (facilitated by GPS, webcams, spyware, or abusive uses of apps and phone functions), as well as control of 'smart' home IoT technologies such as smart meters, voice assistants, and locks. These can impact victims’ autonomy and be used as methods of coercive control and psychological abuse, to establish power over victims and harass them as well as for surveillance.

Technology-facilitated abuse in the context of domestic abuse or gender-based violence is not necessarily fully recognized in the way domestic violence is recorded and countered in the justice system, though it is likely to fit into the categories of “threatening or abusive behaviour or stalking” offences that constituted 88% of breach of the peace-type convictions recorded against abusers in police statistics for Scotland in 2018/19 (5). Additionally, the types of harassing and coercive behaviour involved in such digital abuse are intended to “cause the partner or ex-partner to suffer physical or psychological harm” such as fear, alarm, or distress. This is recognized in the Abusive Behaviour and Sexual Harm (Scotland) Act 2016 as an aggravation of an offence (Abusive Behaviour and Sexual Harm (Scotland) Act 2016, 1(2)).

The harms of technology-facilitated abuse are significant, and part of a range of tactics used by perpetrators.

Restriction of access to and monitoring of mobile phones have become significant elements of coercive control, as well as of stalking behaviour. Abusers may misuse general-purpose software or operating system features, or install more purpose-specific spyware on phones. This can include changing passwords to block or control access to communications and bank accounts, monitoring finances, using location tracking to surveil or stalk victim-survivors, and enabling spyware on phones. One example of psychological abuse often employed against survivors is harassment using payment apps: repeatedly sending small payments to constantly remind victims and survivors that they are within the abuser’s reach.

Technology-facilitated abuse, particularly in the context of smartphones and 'smart home' connected devices and systems integrated into the functioning of a home, raises specific privacy and security concerns in such sensitive situations and introduces new threats and harms. A number of digital technologies may be used by abusers as surveillance mechanisms to stalk victims and monitor their activity throughout the day as a tool of coercive control. This surveillance affects victims/survivors psychologically, impacting their dignity, privacy, and autonomy. The Scottish Government reported that victims commonly said they felt like “sitting ducks”, as their abusers knew where they were at all times.[19]

This can include many 'internet of things' (IoT) devices as well as mobile phones. Webcams and home assistants, such as Alexa or Google Home devices, may be used for surveillance, or to control connected thermostats, lights, locks and other elements of the home, connected devices, or wearables. The effects of this weaponized use are not limited to the possible physical effects of this literal, updated form of 'gaslighting'; they also include the psychological effects of the threat itself, whether or not the threatened control is possible or realized.

There has been increasing recognition of the harm caused by the non-consensual publishing of intimate images, or 'revenge porn', as abuse and harassment. It is one of a number of threatening and abusive uses of social media. The design of social media networks makes it difficult for abuse survivors to control their privacy and cut their abusers off from information about them, as their privacy is affected by the profile privacy settings of everyone they know. Even if they block an abuser from all of their social media, they cannot ensure that everyone in their network also blocks information about them. Technologies such as facial recognition and automated tagging aggravate this risk.

At a government and policy level, support for the programmes and organisations working with victims and survivors of domestic abuse and gender-based violence should consider digital and physical abuse holistically. Similarly, the framing of legal protections in relation to data could take into account the gaps in protection resulting from 'domestic use' exemptions to data protection legislation. Support offering specialized expertise and cybersecurity help for survivors is likely to be increasingly needed. A centralized government cybersecurity resource devoted to this, perhaps as an aspect of the Scottish cyber strategy, would also provide insight and statistics on the prevalence of and trends in technology-facilitated abuse. Additionally, policies supporting a better understanding of these threats through Higher Education could help emerging developers understand the social context in which their products will affect people, individually and socially.

Contact

Email: digitalethics@gov.scot
