Review of emerging technologies in policing: findings and recommendations

Findings and recommendations of the Independent advisory group on new and emerging technologies in policing.


3. Findings and Discussion

In this section, the findings of our study are discussed in turn in relation to each of the five research questions posed.

3.1: The Social and Ethical Implications of Different Types of Emerging Technologies in Policing

The emerging technologies discussed in the literature fall into three broad categories. These are:

1. Electronic Databases (mentioned in 59 out of a total of 173 documents)

2. Biometric Identification Systems (55 out of 173 documents)

3. Electronic Surveillance and Tracking Devices (52 out of 173 documents)[4]

The social and ethical issues associated with each of the different, specific types of technology that fall within each category are detailed in the following sub-sections.

3.1.1: Electronic Databases

According to Bray (2017), data can be regarded as 'encoded information about one or more target phenomena' (such as objects, events, processes, or persons). Nowadays, data is encoded digitally rather than in analogue form. Data is socially and ethically important for three reasons: 1) the process of collecting and organising data requires making assumptions about what is significant or useful, meaning that in practice no dataset is ever fully objective or neutral; 2) digitally encoded data allows information to be stored, managed, duplicated, shared, manipulated, and transformed more efficiently than before; and 3) analysis of electronic data enables the processing of large amounts of data to obtain novel insights that may otherwise be inaccessible (Whittlestone et al., 2019). Electronic database technologies represent one form of continually evolving technology used in contemporary police practice. Electronic databases store, organise, and process information in a way that makes it easy to perform searches and analyses.
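To make this concrete, the sketch below (a purely illustrative Python example; the 'incidents' table and its fields are invented rather than taken from any system in the reviewed literature) shows how digitally encoded records can be stored and then searched or aggregated with a single declarative query. Note that the choice of fields such as 'category' and 'area' itself embodies the kind of assumptions about significance discussed above.

```python
# Purely illustrative sketch: a toy "incidents" database showing why digital
# encoding makes records easy to store, search, and aggregate. Table and
# field names are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute(
    "CREATE TABLE incidents (id INTEGER PRIMARY KEY, "
    "category TEXT, area TEXT, reported_at TEXT)"
)
conn.executemany(
    "INSERT INTO incidents (category, area, reported_at) VALUES (?, ?, ?)",
    [
        ("burglary", "north", "2022-01-03"),
        ("vehicle", "north", "2022-01-05"),
        ("burglary", "south", "2022-01-09"),
    ],
)

# A single declarative query replaces manual trawling of paper records.
for row in conn.execute(
    "SELECT area, COUNT(*) FROM incidents "
    "WHERE category = 'burglary' GROUP BY area"
):
    print(row)  # ('north', 1) then ('south', 1)
```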

Fifty-nine of the 173 documents reviewed from the final sample discussed the use and management of electronic databases within the context of the implementation and use of emergent technologies in policing practice. The following specific types of electronic database technologies and uses were discussed in the literature:

  • Data sharing and third-party data sharing platforms, including for public-private organisation data sharing purposes (17 of 59 documents)
  • Community policing application data (16 of 59 documents)
  • Data pulling platforms (2 of 59 documents)
  • Social media application information (14 of 59 documents)
  • Use of open-source data (4 of 59 documents)
  • Vulnerable population databases and datasets (9 of 59 documents)
  • DNA databases (4 of 59 documents)
3.1.1.1: Data Sharing and Third-Party Data Sharing Platforms

Seventeen of the 59 articles and reports that discussed electronic databases specifically addressed data sharing and third-party data sharing platforms (Asaro 2019; Babuta 2017; Babuta and Oswald 2020; Clavell 2018; Custers 2012; Henman 2019; McKendrick 2019; Neiva et al., 2022; Neyroud and Disley 2008; Skogan and Hartnell 2008; Sanders and Henderson 2013; Holley et al., 2020; Weaver et al., 2021; National Analytics Solutions 2017; Vilendrer et al., 2021). The social and ethical issues identified and discussed by these articles fall into five thematic areas:

  • Safety of Information Held
  • Human rights and privacy
  • Lack of standardisation and accountability
  • Differences in organisational practice
  • Bias embedded in data, data organisation and data sharing processes
a) Safety of Information Held

Eleven documents discuss the importance of the safety of information storage and access in relation to electronic databases (Aston et al., 2021; Clavell 2018; Custers 2012; Leslie 2019; Neiva et al., 2022; Sanders and Henderson 2013; Vilendrer et al., 2021; Neyroud and Disley 2008; Babuta 2017; McKendrick 2019; Weaver et al., 2021). For example, Clavell (2018), using the example of Geographic Information System data in urban policing, discusses how ensuring the safety of data and preventing access breaches requires organisations to confront a series of challenges relating to the organisational structures used to manage the data, and to their technical capacities and expectations, in order to prevent the risk of increased victimisation, inequalities, or inefficiency. Leslie (2019) explores the risk of 'data poisoning', a type of adversarial attack that involves the malicious compromise of data at the point of collection and pre-processing, and which can arise where data collection and procurement involve unreliable or questionable sources, including social media data and third-party curation processes. Sanders and Henderson (2013) examine the use of data sharing via computer-aided dispatch systems and record management systems in relation to violent crime in Canada and the risks posed by potential breaches of information security. Similarly, Neiva et al. (2022) examine Big Data and the interoperability of multiple datasets for criminal investigation, and discuss the risks of sharing Big Data in relation to the potential for enforcing genetic discrimination with DNA databases, as well as for privacy and human rights. Aston et al. (2021) discuss the role of data protection and security in building public confidence and facilitating information sharing with the police online, as well as in face-to-face interactions.
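The 'data poisoning' risk described by Leslie (2019) can be illustrated with a minimal, hypothetical sketch: a toy classifier trained on labelled scores changes its output once mislabelled records are injected at the point of collection. The nearest-centroid classifier and all numbers below are invented for illustration and do not represent any system discussed in the literature.

```python
# Hypothetical sketch of data poisoning: mislabelled records planted at
# collection time shift a class centroid and flip the model's output.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(score, benign, malicious):
    # Assign the score to whichever class centroid it is closer to.
    if abs(score - centroid(malicious)) < abs(score - centroid(benign)):
        return "flag"
    return "ok"

benign = [1.0, 1.2, 0.8, 1.1]   # scores labelled harmless by curators
malicious = [3.0, 3.2, 2.9]     # scores labelled suspicious

print(classify(1.5, benign, malicious))   # 'ok' before the attack

# An attacker with access to the collection pipeline injects mislabelled
# low scores into the 'malicious' set, dragging its centroid downward.
poisoned = malicious + [0.9, 1.0, 1.1, 1.0, 0.9]
print(classify(1.5, benign, poisoned))    # the same score is now flagged
```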

b) Human Rights and Privacy

Eight of these documents discuss the social and ethical implications of electronic databases and third-party information-sharing systems by referring to the risks posed to privacy and human rights and the considerations needed to mitigate these risks (Asaro 2019; Neiva et al., 2022; Holley et al., 2020; Vilendrer et al., 2021; Babuta 2017; Neyroud and Disley 2008; McKendrick 2019; Sanders and Henderson 2013). For example, Holley et al. (2020) discuss how the sharing of information via cyberspace technologies with third parties and between public and private sector organisations creates a decentralisation and fragmentation of the security of personal information, with greater control being given to private security governance professionals. Neyroud and Disley (2008) examine the impacts of electronic databases in relation to the concepts of fairness and legitimacy, arguing that the effectiveness of new technologies for detecting and preventing crime should not, and cannot, be separated from ethical and social questions surrounding the impact which these technologies might have upon civil liberties. They argue that this is due to the close inter-relationship between the effectiveness of the police and public perceptions of police legitimacy, which may be damaged if new technologies are not deployed carefully. The authors argue that strong, transparent management and oversight of these technologies are essential, and suggest factors to which a regime of governance should attend, including the integrity and reliability of the technology, alignment between purpose and use in its deployment, transparency in governance, and public confidence in the technology. Similarly, Vilendrer et al. (2021) use the example of data sharing from mobile applications between first responders during the Covid-19 pandemic in San Francisco in the United States to discuss concerns around the need to secure personal information. In contrast to the other documents that focus on the risks to privacy and human rights, McKendrick (2019) discusses how the storage and sharing of data for counter-terrorism purposes may not have a deleterious effect on human rights, owing to the greater ability to protect citizens' right to life and the need to safeguard against broader misuse of related technologies and data. McKendrick (2019) also argues that broader access to less intrusive aspects of public data and direct regulation of how those data are used – including oversight of activities by private-sector actors – together with the imposition of technical as well as regulatory safeguards, may improve both operational performance and compliance with human rights legislation. However, they also note that it is important that such measures proceed in a manner that is sensitive to their impact on other rights, such as freedom of expression and freedom of association and assembly.

c) Lack of Standardisation and Accountability

Four of the documents reviewed refer to how social and ethical concerns may arise as a result of a lack of standardisation between parties over data use and access, and a lack of accountability (Henman 2019; Babuta and Oswald 2020; Sanders and Henderson 2013; Babuta 2017). For example, Henman (2019) explains that visions of greater efficiency, improved quality of service delivery, and open and accountable government are often not achieved, and that policy and administrative principles may be undermined by increasing fragmentation and a lack of standardisation of procedures. Similarly, Babuta and Oswald (2020) note that there remains a lack of organisational guidelines or clear processes for scrutiny, regulation, and enforcement. They add that this should be addressed as part of a new draft code of practice, which should specify clear responsibilities for policing bodies regarding scrutiny, regulation, and enforcement of these new standards.

d) Differences in Organisational Practices

Three of these documents examine the organisational barriers to information sharing and integrated policing (Henman 2019; Sanders and Henderson 2013; Babuta 2017). For example, Sanders and Henderson (2013) explore how differences in organisational practices can result in digital divides, leading to problems with data integration and a lack of standardisation in practice.

e) Bias Embedded in Data, Data Organisation and Data Sharing Processes

Three documents discussed the embedding of bias within electronic databases and the need for users to adopt a critical perspective towards the data being held (Weaver et al., 2021; Babuta 2017; Vilendrer et al., 2021). For example, Weaver et al. (2021) examine tech-enhanced data sharing between police officers and psychiatric personnel to improve police referrals of individuals experiencing suicide crises into treatment, and discuss the need for information to be both held and shared in a culturally sensitive way, acknowledging that data input and sharing can reflect the biases held by individuals. Similarly, Babuta (2017) acknowledges that data may contain existing biases and may reflect the over- or under-policing of certain communities and racial bias, which are then reproduced in the outcomes generated when those datasets are applied. Babuta (2017) also discusses how individuals from disadvantaged sociodemographic backgrounds are likely to engage with public services more frequently, meaning the police often hold more data relating to these individuals, which may in turn lead to them being calculated as posing a greater risk when that data is applied.
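A minimal, hypothetical simulation of the feedback loop Babuta (2017) describes is sketched below: two groups behave identically, but the group whose contacts with public services are recorded more often accrues more records and is therefore assigned a higher naive 'risk' score. All rates and counts are invented for illustration.

```python
# Hypothetical sketch of a recording-driven feedback loop: identical
# underlying behaviour, unequal recording rates, unequal "risk" scores.
import random

random.seed(1)
CONTACT_RATE = {"group_a": 0.2, "group_b": 0.6}  # chance a contact is recorded
records = {"group_a": 0, "group_b": 0}

for _ in range(1000):  # both groups generate the same number of contacts
    for group, rate in CONTACT_RATE.items():
        if random.random() < rate:
            records[group] += 1

# A naive score proportional to record count "finds" group_b riskier,
# which in practice attracts more attention and still more records.
for group, count in records.items():
    print(group, "records:", count, "naive risk score:", count / 1000)
```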

3.1.1.2 Community Policing Applications

Sixteen of the 59 documents that examine electronic databases in relation to the social and ethical issues surrounding the use of emerging technologies discuss data storage from community policing applications (apps) and their use (Aston et al., 2021; Bloch 2021; Brewster et al., 2018; Davis and Garb 2020; Dunlop et al., 2021; O'Connor 2017; Van Eijk 2018; Weaver et al., 2021; Henman 2019; Ellis 2019; Goldsmith 2015; Hendrix et al., 2013; Gaelle and Joelle 2018; Hendl et al., 2020; Lum et al., 2017). Of these articles, three themes emerged. These were:

  • Risks of enhancing racial inequalities
  • Issues of inclusion/exclusion and social and technological capital
  • Maintaining public trust
a) Risks of Enhancing Racial Inequalities

Two of the documents reviewed specifically address the issue of racism in relation to the storage of data from community policing applications (Bloch 2021; Hendrix et al. 2013). Bloch (2021) examines the use of the Nextdoor app as a means of community-instigated policing, which he argues embeds unchallenged racist attitudes in neighbourhood monitoring data. Hendrix et al. (2013) discuss community policing apps as part of broader community and hot spot policing strategies, noting that these can intensify hot spot policing and data gathering in particular geographic areas, which can in turn exacerbate inequalities in the data held on ethnic minority groups.

b) Issues of Inclusion/Exclusion and Social and Technological Capital

One article discusses how the use of community policing application data can further the risk of exclusion between police and certain population groups owing to differences in the social and technological capital of those using the app and those being subject to monitoring (Brewster et al., 2018). Brewster et al. (2018) explain that the use of technologies in community policing represents a move away from reactive policing models towards those which establish a proactive philosophy, responsive to the wants and needs of communities. These apps are also intended to help improve the relationship and level of engagement between the police and the communities they serve. However, as they are often used more to disseminate information than to support communication in problem-solving, their effectiveness has been limited. Indeed, it can be argued that they do not enhance public participation or community-police relations, but instead widen the gulf in participation owing to differences in social and technological capital between population groups, creating inequalities between those who provide information and those whose information is being recorded.

c) Maintaining Public Trust

Six of the documents reviewed discuss the issue of trust between community members and police in relation to electronic information from community policing applications (Aston et al., 2021; Bloch 2021; Van Eijk 2018; O'Connor 2017; Ellis 2019; Lum et al., 2017). For example, Van Eijk (2018) discusses the coproduction of data in community-police applications and explores how the perceptions that citizens and professionals hold about each other's aims and engagement impact upon willingness to share information. Van Eijk (2018) argues that for greater two-way collaboration and trust to occur, transparency is needed about the aims of the engagement and how the data will be held. Lum et al. (2017) also explain that a lack of transparency may limit these applications' potential to improve police-community relationships.

In contrast, O'Connor (2017) argues that by allowing people to communicate directly with the police, and by allowing information to be shared about matters like safety/traffic, community, and police activity in addition to specific matters of crime and investigation, these apps can help enhance trust between the police and community. However, they also stress that consideration must be given to the visibility and storage of information to avoid damaging police-community relations. Aston et al. (2021) examine the issue of public trust and confidence amongst members of the public from minority groups in relation to online community policing data, and show that the key concerns raised surrounded the anonymity and privacy of information, the risk of abuse of personal data, and the lack of an option to opt out of having personal data stored.

3.1.1.3: Data Pulling Platforms

Only two of the documents within the sample explored the social and ethical issues associated with data pulling platforms (Ellison et al., 2021; National Analytics Solutions 2017). Ellison et al. (2021) explore how the utilisation of big data and its integration with administrative and open data sources and platforms can reflect inequalities in police resources, which in turn influences its effectiveness and its outcomes in terms of the interplay between policing demand and deployment. The National Analytics Solutions (2017) report explores the ethical implications associated with data pulling platforms and argues that one problem with these is that prisons, probation, education and health sectors, as well as civil society organisations, private industry, and communities, have different cultures and practices regarding the collection, sharing, processing, and use of different types of data. This, in turn, can create shifts in distributions of power, which then manifest in terms of data availability.

3.1.1.4: Social Media Platforms and Data Storage

Fourteen of the 59 documents that discuss electronic information and databases discuss the social and ethical implications associated with the collection, storage, management, and use of social media data in policing (Bullock 2018; Ellis 2019; Fussey and Sandhu 2020; Goldsmith 2015; Hendrix et al., 2019; Meijer and Thaens 2013; Oh et al., 2021; Todd et al., 2021; Walsh and O'Connor 2019; Williams et al., 2021; Williams et al., 2013; Veale 2019; Babuta 2017; Strom 2017). Particular issues identified were:

  • Issues of lack of alignment in organisational culture
  • Issues surrounding legitimacy of police action
  • The management and use of sensitive information
  • Risks of enhancing actual and perceived social injustices
a) Lack of Alignment in Organisational Culture

Three of the documents discussed how organisational culture affects how the collection, storage, management, and use of social media data is undertaken, exploring how a lack of cooperation and standardisation between police departments, and between police and third-party organisations, as to how this should be conducted is a cause for concern (Bullock 2018; Hendrix et al., 2019; Meijer and Thaens 2013). For example, Bullock (2018) explains that social media has not facilitated interaction between police and communities in England in the way that was hoped, owing to how uses of social media data are mediated by the existing organisational and occupational concerns of police departments. Similarly, Hendrix et al. (2019) argue that police use of social media data lacks clear guidance as to how it fits within forces' guiding philosophy and operational goals. Meijer and Thaens (2013) explore how a lack of clear government policy or guidance for the collection, management, and use of social media data poses a potential ethical risk and hinders its potential effectiveness in police practice.

b) Legitimacy of Police Action

Two articles discuss how the availability of social media data can be used to question and negotiate the legitimacy of police actions (Ellis 2019; Goldsmith 2015). Ellis (2019) examined the impacts of digital media technologies on police relations with lesbian, gay, bisexual, transgender, intersex, and queer (LGBTIQ) communities in Sydney, focusing on how a viral video of police excessive force filmed after the 2013 Sydney Gay and Lesbian Mardi Gras parade raised questions about legitimacy and procedural justice. She then discusses how social media data can be used to pressure the police to account for their actions, and can lead to public questioning of what can be considered the legitimate boundaries of police practice. Goldsmith (2015) examines the potential problems for police reputation, operational effectiveness, and the integrity of the criminal justice system that can arise from off-duty use of social media by police officers, and the potential harm this can cause to community-police relations.

c) Management and Use of Sensitive Information

Four articles discuss the issues associated with sensitive information obtained specifically through social media data (Fussey and Sandhu 2020; Todd et al., 2021; Oh et al., 2021; Walsh and O'Connor 2019). For example, Fussey and Sandhu (2020) discuss the use of social media information in police surveillance activities as part of information gathering, digital forensics, and covert online child sexual exploitation investigations, and the ethical issues associated with extended surveillance and storage of data. Similarly, Todd et al. (2021), in their study of social media in relation to online stalking, domestic violence and homicide, raise questions about the digital footprints of victims and perpetrators and their use in criminal investigations and in responding to victims.

d) Risks of Enhancing Actual and Perceived Social Injustices

Two articles explored how the management and use of social media data pose a potential risk of exacerbating actual and perceived social injustices and tensions between police and community members (Williams et al., 2013; Walsh and O'Connor 2019). Williams et al. (2013) discuss how the Cardiff Online Social Media Observatory (COSMOS) affords users the ability to monitor social media data streams for signs of high tension, which can be analysed to identify deviations from the norm (levels of cohesion/low tension). They note that this may risk enhanced surveillance of particular community groups, which may negatively affect relations between police and these communities. Walsh and O'Connor (2019) explore how social media provides unprecedented capacities to monitor the police and to expose, circulate, and mobilise around perceived injustice, whether brutality, racial profiling, or other forms of indiscretion.
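Although the source does not describe COSMOS at a technical level, the kind of 'deviation from the norm' monitoring it refers to can be sketched in minimal form: hourly message volumes are compared against a baseline mean and standard deviation, and counts whose standardised deviation exceeds a threshold are flagged. The counts and the threshold below are invented for illustration.

```python
# Minimal, hypothetical sketch of deviation-from-baseline monitoring on a
# social media stream; not the actual COSMOS implementation.
from statistics import mean, stdev

baseline = [42, 38, 45, 40, 39, 44, 41, 43]  # hourly counts in calm periods
mu, sigma = mean(baseline), stdev(baseline)

def tension_alert(count, threshold=3.0):
    z = (count - mu) / sigma  # standardised deviation from the norm
    return z > threshold

print(tension_alert(44))   # False: within normal variation
print(tension_alert(120))  # True: a spike that would be flagged for review
```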

3.1.1.5: Open-Source Data

Only four of the documents explored the social and ethical implications of the use of open-source data in relation to emerging technologies (Clavell et al., 2018; Ellison et al., 2021; Egbert and Krasmann 2020; Kjellgren 2022). Of these, one (Clavell et al., 2018) explores the social impact of open-source intelligence data, showing how this technology can result in increased victimisation if not adequately managed. Egbert and Krasmann (2020) explore how access to open-source data may 'drive' predictive policing strategies and sometimes unnecessary pre-emptive police action, while Ellison et al. (2021) discuss the use of open data sources in relation to police demand. Finally, Kjellgren (2022) considers how the application of big data analytics to open-source data associated with sex work and human trafficking can disguise simplified conceptualisations of the problem, leading both to over-policing and to over-claiming regarding the scale of the issues being addressed.

3.1.1.6: Vulnerable Population Databases and Datasets

Nine of the documents reviewed explore the social and ethical implications of vulnerable population data in relation to developments in technology use in policing (Hendl et al., 2020; Lumsden and Black 2020; Malgieri and Niklas 2020; Powell and Henry 2018; Wolfe 2021; Babuta 2017; Strom 2017; Whittlestone et al., 2019; National Analytics Solutions 2017). Four key issues were identified in the literature. These were:

  • Opportunities and risks associated with surveillance of vulnerable populations
  • Issues of human rights and justice
  • The need for greater consultation and communication with vulnerable groups as to how data is stored and used
  • Lack of guidance and prioritisation for data collection and management
a) The Surveillance of Vulnerable Populations

Six of the documents discuss the opportunities and risks associated with the potential for enhanced surveillance of vulnerable populations through the collection of and access to data on those populations (Hendl et al., 2020; Whittlestone et al., 2019; Babuta 2017; Strom 2017; Powell and Henry 2018; Wolfe 2021). Hendl et al. (2020) explain that in digital surveillance activities vulnerable subpopulations pay a higher price for surveillance measures, and that there are concerns that improperly restricted data availability could lead to disproportionate profiling, policing, and criminalisation of marginalised groups. Wolfe (2021) examines the ethical issues associated with electronic data pertaining to missing people in police activity, while Whittlestone et al. (2019) explore the positive and negative implications of the enhanced holding of data on disadvantaged and underrepresented groups, such as women and people of colour, or vulnerable people such as children and older people, and argue that greater consideration needs to be given to which tensions between values are most likely to arise and how they can be resolved. Babuta (2017) discusses the implications of data storage and analytics that make it possible for police forces to use past offending history to identify individuals at increased risk of reoffending, as well as using partner agency data to identify individuals who are particularly vulnerable and in need of safeguarding. Strom (2017) discusses the issue of protecting vulnerable people from harm and explains how a lack of priority or guidance as to how and when information should or could be shared hinders the effective management and deployment of vulnerable population data. Finally, Powell and Henry (2018) explore the use of vulnerable population data in relation to: (1) online sexual harassment; (2) gender- and sexuality-based harassment; (3) cyberstalking; (4) image-based sexual exploitation (including 'revenge pornography'); and (5) the use of communications technologies to coerce a victim into an unwanted sexual act, and address the challenges and promises of law enforcement in this area.

b) Issues of Human Rights and Justice

Only one of the documents reviewed specifically addresses the issues of human rights and justice in relation to vulnerable population data (Malgieri and Niklas 2020). Malgieri and Niklas (2020) explore how discussion and decision-making about vulnerable individuals and communities and the use of their data spread from research ethics to human rights. They explore how the development, deployment, and use of data-driven technologies can pose substantial risks to human rights, the rule of law, and social justice, and how the implementation of such technologies can lead to discrimination through the systematic marginalisation of different communities and the exploitation of people in particularly sensitive life situations. They argue for the special role of personal data protection and call for a vulnerability-aware interpretation. They also outline how the notion of vulnerability can influence issues of consent, Data Protection Impact Assessment, the role of Data Protection Authorities, and the participation of data subjects in decision-making about data processing.

c) The need for Greater Communication with Vulnerable People as to how Data is Stored and Used

Two of the documents discuss the need for greater consultation and communication with vulnerable groups as to how data is collected, stored, and utilised in police practice, as well as for greater clarity around the issue of access to personal data (Malgieri and Niklas, 2020; Lumsden and Black, 2020). For example, Lumsden and Black (2020) discuss the importance of ensuring that data and services are responsive to the needs of D/deaf citizens and argue that when designing police services and technologies, the focus must include the needs of D/deaf citizens.

d) Lack of Guidance and Prioritisation for Data Collection and Management

Two documents discuss the current lack of guidance and priorities for the collection, management, and use of vulnerable population data (Babuta 2017; Strom 2017). Strom (2017) argues that there should be an assessment to establish priorities, and discusses how legal constraints on data processing apply differently to law enforcement than to other parts of government likely to be involved in data-sharing processes concerning vulnerable populations, such as social housing services. Babuta (2017) also explains that present data-sharing deficiencies mean that the police's understanding of vulnerability is somewhat one-dimensional, and argues that a clear decision-making framework should be developed at the national level to ensure the ethical use of vulnerable population data.

3.1.1.7: DNA Databases

Four of the documents referred to social and ethical concerns associated with electronic databases storing information pertaining to human DNA (Custers and Vergouw, 2015; Gaelle and Joelle, 2018; Neiva et al., 2022; Rigano, 2019). For example, Rigano (2019) explains that DNA analysis produces large amounts of complex data in electronic format that require storage management. Gaelle and Joelle (2018) explore the production of ethical norms regulating biomedical practices and the importance of these for police work and the management of genetic data.

3.1.2: Biometric Identification Systems

Fifty-five of the 173 documents in the final sample discussed the social and ethical implications of biometric identification systems in policing practice. The following specific types of biometric identification systems were discussed in the literature:

  • Facial recognition technology, including ‘remote’ facial recognition technologies (17 out of 55 documents)
  • Artificial intelligence, including AI smart sensors, automated algorithm processes and decision-making tools, and emotional recognition technologies (21 out of 55 documents)
  • Voice pattern analysis tools (2 out of 55 documents)
3.1.2.1: Facial Recognition Technologies

According to Chowdhury (2017), a number of terms are used to refer to facial recognition technology, including automated facial recognition, live facial recognition (LFR), and facial recognition processes. Facial recognition technologies analyse an individual's face to determine their identity in real time, examining facial patterns such as the distance between the eyes and the length of the nose in order to create facial templates, which are compared to templates held on record. If the comparison results in a match, a confidence score is produced; thresholds for how strong a match must be are set by the entity deploying the system (Chowdhury 2017). The matching process can be undertaken on a one-to-one basis, where the system confirms that an image matches a different image of the same person in a record database, or on a one-to-many basis, where one image is compared against all other records within a database.
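The matching logic described by Chowdhury (2017) can be sketched in minimal form, under the common (assumed) design in which faces are reduced to numeric template vectors and compared by similarity against a deployer-set threshold. The vectors and the 0.9 threshold below are invented; operational systems use learned embeddings and operator-calibrated thresholds.

```python
# Hypothetical sketch of one-to-one (verification) and one-to-many
# (identification) matching on invented template vectors.
import math

def similarity(a, b):
    # Cosine similarity: 1.0 means identical templates.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def one_to_one(probe, record, threshold=0.9):
    # Verification: does the probe match this single enrolled record?
    return similarity(probe, record) >= threshold

def one_to_many(probe, database, threshold=0.9):
    # Identification: return every record scoring above the threshold.
    return [(name, s) for name, rec in database.items()
            if (s := round(similarity(probe, rec), 3)) >= threshold]

probe = [0.9, 0.1, 0.4]
database = {"record_1": [0.88, 0.12, 0.41], "record_2": [0.1, 0.9, 0.2]}
print(one_to_one(probe, database["record_1"]))  # True: above threshold
print(one_to_many(probe, database))             # [('record_1', 1.0)]
```

In the one-to-many case, every record above the threshold is returned, which is why the deployer's choice of threshold directly shapes how many candidate matches (and false matches) are produced at scale.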

Seventeen documents discussed the social and ethical implications associated with the implementation and use of facial recognition technologies (Almeida et al., 2021; Babuta 2017; Babuta and Oswald 2020; Bradford et al., 2020; Bragias et al., 2021; Bromberg et al., 2020; Chowdhury 2020; Fussey et al., 2021; Fussey and Murray 2019; Hood 2020; Keenan 2021; McGuire 2021; McKendrick 2019; National Physical Laboratory and Metropolitan Police Service 2020; Smith and Miller 2022; Urquhart and Miranda 2021; Williams 2020). Four key social and ethical issues were identified from this literature. These were:

  • Trust and legitimacy
  • Risk of enhancing inequalities for marginalised groups
  • Privacy and security
  • Lack of standardised ethical principles and guidance
a) Trust and Legitimacy

Five of the documents discuss how issues of trust and legitimacy manifest in relation to facial recognition technologies (Almeida et al., 2021; Bradford et al., 2020; Bragias et al., 2021; McGuire 2021; Williams 2020). Bradford et al. (2020) explore the results of a London-based study of public responses to Live Facial Recognition technologies, which enable police to conduct real-time automated identity checks in public spaces. They argue that public trust and legitimacy are important factors in the acceptance or rejection of these technologies, and highlight that high levels of trust and perceived legitimacy help to alleviate privacy concerns about their use. McGuire (2021) explains that perceptions of the potential misuse of these technologies, and concern about the denial of rights, can threaten the viability of policing and lead to questions about the limits of the automation of policing. Bragias et al. (2021) explain that although facial recognition technologies offer a fast, efficient, and accurate way of identifying criminals, the public is often sceptical about how, and for what purposes, the police will use the technology. They argue that if police use of FRT is perceived as illegitimate, police-citizen relationships may deteriorate, especially for marginalised communities.

b) Risk of Enhancing Inequalities for Marginalised Groups

Five of the documents discuss how the use of facial recognition technologies poses particular social and ethical issues for police practice and for relations with marginalised communities (Bragias et al., 2021; Chowdhury 2020; Hood 2020; Urquhart and Miranda 2021; Williams 2020). For example, Urquhart and Miranda (2021) present an empirical and legally focused case study of live automated facial recognition technologies in British policing, discussing police concerns about how these technologies may affect or be affected by anti-discrimination law, and how the EU AI Regulation makes LFR a prohibited form of AI. Hood (2020) explores the integration of facial recognition into police body-worn camera devices and discusses the political dangers of these technologies, arguing that they risk reinforcing normative understandings of the body, pose enhanced risks for marginalised groups, and reinforce both the privilege of perspective granted to police in visual understandings of law enforcement activity and racial marginalisation. Similarly, Chowdhury (2020) argues that facial recognition technologies represent a form of monitoring technology with a long history of being deployed primarily against people of colour. Chowdhury (2020) explains that people of colour are at substantially greater risk of being over-policed, and argues that even improvements in the accuracy of facial recognition technologies are likely to exacerbate racial inequalities, because the technology will most likely continue to be used disproportionately against people and communities of colour. Chowdhury uses the example of the Metropolitan Police's London trials of this technology at the Notting Hill Carnival to highlight the inequality of outcomes, and to show the dangers of failing to carry out an equality impact assessment before deploying this form of technology.

c) Privacy and Security

Eight of the documents reviewed discuss issues pertaining to privacy and personal security in relation to the deployment of facial recognition technologies (Almeida et al., 2021; Bragias et al., 2021; Keenan 2021; Smith and Miller 2022; Urquhart and Miranda 2021; Chowdhury 2020; Fussey and Murray 2019; National Physical Laboratory and the Metropolitan Police Service 2020). For example, Keenan (2021) explores how, in the case of R (on the application of Bridges) v Chief Constable of South Wales Police, the Court of Appeal held that the deployment of live automated facial recognition technology (AFR) by the South Wales Police Force (SWP) was unlawful: it violated the right to respect for private life under Article 8 of the European Convention on Human Rights because it lacked a suitable basis in law. Keenan also explores how the Data Protection Impact Assessment conducted under section 64 of the Data Protection Act 2018 failed to assess the risks to the rights and freedoms of the individuals processed by the system. Similarly, Smith and Miller (2022) explain that biometric facial recognition technologies, which involve the automated comparison of facial features, carry significant privacy implications that require law and regulation.

d) Lack of Standardised Ethical Principles and Guidance

Two of the documents explore the need for standard ethical principles and guidance to be introduced in order to help mitigate the social and ethical risks associated with facial recognition technologies (Babuta and Oswald 2020; Smith and Miller 2022). For example, Babuta and Oswald (2020) explain that there remains a lack of organisational guidelines or clear processes for the scrutiny, regulation, and enforcement of biometric identification systems, including facial recognition technologies.

3.1.2.2: Artificial Intelligence

The term Artificial Intelligence (AI) is used to refer to any technology that performs tasks that might be considered to count as ‘intelligent’ in the sense that they replicate complex human cognitive processes and abilities in machines. These technologies can be used to optimise processes and can be designed and programmed to operate autonomously (Chowdhury 2019).

Twenty-one of the documents reviewed discuss the actual and potential social and ethical issues associated with Artificial Intelligence (AI) technologies in policing (Almeida et al., 2021; Aizenberg and van den Hoven 2020; Alikhademi et al., 2022; Asaro 2019; Beck 2021; Bradford et al., 2022; Dechesne 2019; Ellison et al., 2021; Ernst et al., 2021; Hayward and Maas 2021; Hobson et al., 2021; Noriega 2020; Smith and Miller 2022; Wright 2021; Grimond and Singh 2020; McKendrick 2019; Whittlestone et al., 2019; Urquhart and Miranda 2021; Babuta 2017; Oswald 2019; Leslie 2019). Within these documents, the key social and ethical issues discussed are broadly focused on the following five themes:

  • Reproduction of systemic bias of human decision makers
  • The need for ethical guidelines and laws for risk minimisation and improvements in efficiency potential
  • Issues of accuracy, fairness, and transparency
  • Specific risks of racial and gender bias
  • Potential for use by perpetrators of crime
a) Reproduction of Systemic Bias of Human Decision Makers

One article discusses the issue of the potential reproduction of systemic bias from human decision-making in the deployment of artificial intelligence. Alikhademi et al. (2022) discuss the use of Artificial Intelligence in predictive policing and reveal how such systems are susceptible to replicating the systemic bias of previous human decision-makers.

b) Accuracy, Fairness and Transparency

Nine of the documents reviewed discuss the issues of accuracy, fairness, and transparency associated with artificial intelligence in policing (Alikhademi et al., 2022; Whittlestone 2019; Smith and Miller 2022; Beck 2021; Veale 2019; Hobson et al., 2021; Asaro 2019; Wright 2021; McKendrick 2019). For example, Beck (2021) discusses the issue of fairness in relation to the use of artificial intelligence in law enforcement, predictive policing, and risk assessment, and explains that concerns about fairness are rooted in concerns about the prospect of bias and an apparent lack of operational transparency. Beck also shows how media coverage of the use of artificial intelligence can exacerbate these concerns, but argues that potential solutions will be found through political and legislative processes that aim to achieve an acceptable balance between competing priorities. Hobson et al. (2021) focus on the issue of fairness in relation to algorithmic policing and show that members of the public tend to view a decision as less fair and appropriate when an algorithm decides than when a police officer decides; they also show that perceptions of fairness and appropriateness were strong predictors of support for police algorithms. They conclude that algorithmic decision-making may damage trust in the police, particularly in cases where the police rely heavily or solely on it. Similarly, Asaro (2019) discusses the risks around the use of data-driven algorithms in policing, how these raise questions about fairness by effectively treating people as guilty of (future) crimes for acts they have not yet committed and may never commit, and how the use of predictive information systems may shape the decisions and behaviours of police officers.

c) Risks of Racial and Gender Bias

Four documents discuss the potential risks of racial and gender bias in relation to Artificial Intelligence in policing (Noriega 2020; Asaro 2019; McKendrick 2019; Whittlestone 2019). For example, Noriega (2020) acknowledges that racial and gender bias may be embedded in the design and implementation of artificial intelligence technologies; however, they also discuss the potential of artificial intelligence to promote a non-biased environment during police interrogation, mitigating racial and gender divides in statistics regarding false confessions.

d) The Need for Ethical Guidelines and Law to Minimise Harm

Nine of the documents discuss the need for clear ethical guidelines and laws to minimise the potential harms associated with the use of artificial intelligence technologies in policing (Almeida et al., 2021; Asaro 2019; Bradford et al., 2022; Dechesne 2019; Ernst et al., 2021; Leslie 2019; Whittlestone 2019; Urquhart and Miranda 2021; Oswald 2019).

e) Potential for Use by Perpetrators of Crime

One article explores the potential risk of the application of Artificial Intelligence technologies by perpetrators of crime (Hayward and Maas 2021).

3.1.2.3: Voice Recognition Technologies

Two documents discuss social and ethical considerations of the use of voice recognition technologies in police practice (Lindeman et al. 2020; McKendrick 2019). Lindeman et al., (2020) explore how voice recognition technologies, as well as mobile, cloud, robotics and connected sensors are associated with concerns related to privacy and security and political and regulatory factors affecting interoperability, as well as concerns about a lack of standards. McKendrick (2019) also explains that voice recognition technologies are associated with concerns regarding human rights and a lack of well-established norms governing the use of AI technology in practice.

3.1.3: Surveillance Systems and Tracking Devices

Fifty-two of the 173 documents in the final sample discussed the social and ethical implications of surveillance technologies and tracking devices in policing practice. The following specific types of these emerging technologies were discussed in the literature:

  • Drones
  • Smart devices and sensors
  • Location and ‘Hot spot’ analysis
  • Body worn cameras
  • Autonomous security robots
  • CCTV and visual/optical technologies
3.1.3.1: Drones

Seven of the documents reviewed discuss the social and ethical implications associated with drone technology (Klauser 2021; Miliaikeala et al., 2018; Milner et al., 2020; Page and Jones 2021; Rosenfield 2019; Sakiyama et al., 2017; Wall 2016). The specific social and ethical issues associated with these forms of technology were:

  • Legitimacy of use by police departments
  • Issues of the development of an aerial geopolitics of security
  • Public confidence and trust
  • Concerns relating to racial bias
  • Privacy
a) Legitimacy of use by police departments

One of the articles reviewed discusses the issue of the legitimacy of drone use by police departments in the United States (Miliaikeala et al., 2018). They examine how unmanned aerial vehicles (UAVs, or 'drones') have come into routine police practice and show that public attitudes toward police use of UAVs, and visual monitoring technology overall, are mixed, owing to perceptions of police legitimacy and other criminal justice concerns.

b) Issues of the development of an aerial geopolitics of security

Two articles examine the issue of the development of a new aerial geopolitics of security as a result of the implementation of drones in security and policing (Klauser 2021; Milner et al., 2020). For example, Klauser (2021) explores the expectations and practices of new police drones in Switzerland to show how drones are transforming the ways in which the aerial realm is perceived within the context of policing, which has significant implications for power relations between the police and the public and for social governance.

c) Public confidence and trust

Five of the documents explore issues pertaining to public confidence and trust (Miliaikeala et al., 2018; Milner et al., 2020; Page and Jones 2021; Rosenfield 2019; Sakiyama et al., 2017). For example, Milner et al. (2020) discuss how public opinion can affect the success of the use of these technologies, and critically examine proposals for using drones to monitor political protests in the US.

d) Concerns relating to racial biases in the deployment of these technologies

Three articles discuss concerns relating to racial bias in the deployment of drones in police practice (Page and Jones 2021; Sakiyama et al., 2017; Wall 2016). For example, Page and Jones (2021) examine how, in recent years, US police departments have incorporated new aerial technologies that promise to make policing more efficient and "race-neutral", including drones, which are positioned as unbiased and intended to function as an anti-emotional third-party witness to exchanges between the state and the public. However, they found that the supposed accountability offered by these technologies does not upend the disciplining of emotion, and they examine how video footage demonstrates that ethnic minorities (especially women) regulate their emotional reactions to state violence both despite and because of the presence of these devices. Wall (2016) discusses the issue of state violence and routine police surveillance in the US, which has gained recent attention as a result of the Black Lives Matter movement, and argues that, if unregulated, these forms of technology may increase the risks of accusations of state violence against minority groups when deployed in domestic policing contexts.

e) Privacy

Two of the documents discuss concerns relating to privacy in relation to the deployment of drones in policing practice (Sakiyama et al., 2017; Rosenfield 2019). Sakiyama et al. (2017) examine how the use of this form of technology, in general and within the particular context of domestic policing activities, raises serious concerns about personal privacy and the greater intrusion of new forms of 'big brother' surveillance into people's daily lives. They also examine socio-demographic differences in public support for drone usage in this context. Rosenfield (2019) explains that the introduction of drones in the traffic enforcement context can lead to public acceptance challenges, which can severely hinder their potential impact, and discusses how privacy and safety are the main concerns expressed with regard to such technology in both the US and Israeli contexts.

3.1.3.2: Smart Devices and Sensors

Sixteen of the articles discussed social and ethical issues associated with the use of smart devices and sensors in policing (Braga et al., 2013; Brandt et al., 2021; Brewster et al., 2018; Catte et al., 2021; Ekabi et al., 2020; Joh 2019; Joyce et al., 2013; Kuo et al., 2019; Moon et al., 2017; Paterson and Clamp 2014; Sandhu and Fussey 2021; Stone 2018; Tulumello and Iapado 2021; Weaver et al., 2021; Whitehead and Farrell 2008; Urquhart, Miranda and Podoletz, 2022). Two overarching key ethical issues were identifiable within this body of literature. These were:

  • The issue of privacy
  • Trust and legitimacy of police use
a) Privacy

Seven of the articles discussed the issue of privacy in relation to the use of smart devices and sensors in police activities and investigations. For example, Joh (2019) explains that as policing becomes increasingly 'smart', concerns are rising about the level of surveillance that highly networked systems make possible. Joh also discusses how these devices mean that police services will be required to spend more time watching their outputs, while having less freedom from being watched themselves.

b) Trust and legitimacy of police use

Four of the articles spoke to the issue of trust and the legitimacy of police use in relation to the deployment of these devices (Moon et al., 2017; Joyce et al., 2013; Paterson and Clamp 2014; Braga and Schnell 2013). For example, Joyce et al., (2013) examine how the introduction of smart policing initiatives requires ongoing collaboration with both the public and with researchers to maintain trust.

3.1.3.3: Location and ‘Hot Spot’ Analysis Tools

Four of the documents reviewed discuss the social and ethical issues relating to location and hot spot analytical technologies (Koper et al., 2015; Nellis 2014; Braga and Schnell 2013; Hendrix et al., 2013). Key issues identified were:

  • Effectiveness in reducing crime
  • Challenges concerning the legitimacy of product selection
  • Lack of guidance or integration of technology within specific crime reduction agendas
a) Effectiveness in reducing crime

Two of the documents discussed the effectiveness of these technologies in actually reducing crime (Koper et al., 2015; Braga and Schnell 2013). Koper et al. (2015) examined the use of mobile technology in hot spot policing in the US and found that officers used these technologies primarily for surveillance and enforcement (e.g., checking automobile license plates and running checks on people during traffic stops and field interviews) rather than for strategic problem-solving and crime prevention. They concluded that applications of mobile computing may have little, if any, direct and measurable impact on officers' ability to reduce crime in the field. Braga and Schnell (2013) examined the Boston Police Department's implementation of the Safe Street Teams program, a Smart Policing Initiative to control violent crime "hot spots", to assess its ability to prevent violent crime.

b) Challenges concerning the legitimacy of product selection

One article (Nellis 2014) discussed how England and Wales have been privatising their probation service and creating an advanced electronic monitoring scheme using combined GPS tracking and radio frequency technology. Nellis (2014) examines how providers of these technologies had been overcharging the government for their services, resulting in a series of enquiries and in concerns about the legitimacy of the adoption of these technologies.

c) Lack of guidance or integration of technology within specific crime reduction agendas

One article raises concerns over the issue that police departments may adopt these technologies without giving proper consideration to how this form of technology fits within their guiding philosophy or operational goals (Hendrix et al., 2019).

3.1.3.4: Body Worn Cameras

Twenty-three of the documents reviewed discussed the social and ethical implications associated with body worn cameras (Lum et al., 2017; White et al., 2018; Ariel et al., 2015; Saulnier et al., 2020; Smykla et al., 2016; Huff et al., 2018; Miranda 2022; Todak et al., 2018; Backman and Lofstrand 2021; Stalcup and Helm 2016; Bromberg et al., 2020; Stone 2018; Cuomo and Dolci 2021; Gramaglia and Phillips 2018; Hamilton-Smith et al., 2021; Healey and Stephens 2017; Henne et al., 2021; Miliaikeala et al., 2018; Hood 2020; Murphy and Estcourt 2020; Page and Jones 2021; Ray et al., 2017; Sahin and Cubukcu 2021). The following types of social and ethical issues were highlighted within this body of literature:

  • Implications for public-state relationships
  • Impacts on police officers and police practice
  • Concerns about racial biases inherent in deployment of the technology
a) Implications for public-state relationships

Eleven of the documents discussed the impacts of these technologies on public-state relationships (White et al., 2018; Saulnier et al., 2020; Todak et al., 2018; Hamilton-Smith et al., 2021; Healey and Stephens 2017; Lum et al., 2017; Ariel et al., 2015; Backman and Lofstrand 2021; Bromberg et al., 2020; Murphy and Estcourt 2020; Smykla et al., 2016). For example, Ariel et al. (2015) examined the effects of body worn cameras on complaints against the police, while Hamilton-Smith et al. (2021) examined the interplay of police techniques and surveillance technologies in the policing of Scottish football. They found that several practices were considered intimidatory and argued that the use of technologies such as powerful hand-held cameras and body worn video (BWV) has had a detrimental impact on police-fan relationships, interactions, and dialogue. Murphy and Estcourt (2020) examine concerns around privacy in relation to body-worn cameras in both the Australian and US contexts.

b) Impacts on police officers and police practice

Four of the documents discuss the impacts of body worn cameras on members of the police service as well as on policing practice (Gramaglia and Phillips 2018; Henne et al., 2021; Huff et al., 2018; Miranda 2022). For example, Gramaglia and Phillips (2018) found that police officers in the US and UK wanted the ability to review body camera images prior to writing a report, while Henne et al. (2021) discuss how the introduction of body worn cameras has been a popular response to public demands for greater police accountability, particularly in relation to racially marginalised communities. They argue that the use of body worn cameras redefines police violence into a narrow conceptualisation rooted in encounters between citizens and police, directing attention away from the structural conditions and institutions that perpetuate police violence. Miranda (2022) discusses the challenges faced during the implementation of this form of technology in the UK police context, identifying the practical and techno-social challenges associated with these technologies and the interrelationships between these types of challenges. They conclude that how these cameras are used and how they operate technically are connected, which raises significant ethical issues for data management and storage.

c) Concerns about racial biases inherent in deployment of the technology

Two articles discuss concerns about racial biases and inequalities in relation to the deployment of these forms of technology (Murphy and Estcourt 2020; Hood 2020). Murphy and Estcourt (2020) explain how the use of body worn cameras and other surveillance devices may contribute to the over-surveillance of minority communities. Similarly, Hood (2020) explores how these technologies may serve to leverage political power and reinforce racial marginalisation.

3.1.3.5: Autonomous Security Robots

Only one of the documents reviewed referred to the issues associated with autonomous robots in relation to policing practice. Asaro (2019) considers the ethical challenges facing the development of robotic systems that can potentially deploy violent and lethal force against humans. Although the use of violent and lethal force is not usually acceptable, police officers are authorised by the state to use it in certain circumstances in order to keep the peace and protect individuals and the community from immediate threat. With the increased interest in developing and deploying robots for law enforcement tasks, including robots armed with weapons, Asaro (2019) discusses the problem of designing human-robot interactions (HRIs) in which violent and lethal force might be among the actions taken by the robot.

3.1.3.6: CCTV and Visual/Optical Technologies

Six of the documents within the sample discuss the social and ethical issues associated with Closed Circuit Television (CCTV) and other visual/optical forms of technology (Aston et al., 2022; Dunlop et al., 2021; Miliaikeala et al., 2018; Brookman and Jones 2022; Clavell et al., 2018; Lauf and Borrion 2021). For example, Dunlop et al. (2021) examine how these forms of technology play an important role in preventing and responding to hate crime, which can improve police-community relationships. However, in their review of the use of CCTV in British homicide investigations, Brookman and Jones (2022) argue that although CCTV is used more frequently than any other kind of forensic science or technology to identify and charge suspects, particular challenges are associated with how CCTV footage is recovered, viewed, shared, interpreted, and packaged for court; in particular, the lack of clear standards and principles can be especially problematic. Clavell et al. (2018) argue that if these technologies are not managed correctly, they can result in increased victimisation, inequalities, or inefficiency. Aston et al. (2022) examine the use of mobile devices by the police and the public, using the concept of the 'abstract police' to consider the impact of mobile surveillance technologies on legitimacy between members of the public and the police, as well as internally within police departments.

Table 1 shows a summary of the findings of the social and ethical issues associated with each type of technology.

With the key social and ethical considerations associated with the various types of emerging technologies having been identified in this sub-section, the next sub-section will focus on the legal considerations associated with the adoption of emerging technologies in policing.

Table 1 - Summary of Findings: Social and Ethical Issues Associated with Emerging Technologies by Technology Type and Specific Technology

| Technology Type | Specific Technology | Associated Social and Ethical Issues |
| --- | --- | --- |
| Electronic Databases | Data Sharing and Third-Party Data Sharing Technologies | Safety of Information Held; Human Rights and Privacy; Lack of Standardisation & Accountability; Differences in Organisational Practices; Bias Embedded in Data Storage Practices |
| Electronic Databases | Community Policing Application Data | Risk of Enhancing Racial Inequalities; Issues with Exclusion and Social and Technological Capital; Maintaining Public Trust |
| Electronic Databases | Data Pulling Platforms | Reflective of Inequalities in Policing Resources; Reflective of Unequal Distribution of Power between Different Policing Organisations |
| Electronic Databases | Social Media Application Information | Lack of Alignment in Organisational Culture; Legitimacy of Action; Management and Use of Sensitive Data; Risk of Enhancing Social Injustices |
| Electronic Databases | Use of Open-Source Data | Risk of Increased Victimisation; Risk of Unnecessary Pre-Emptive Police Action & Over-Policing |
| Electronic Databases | Vulnerable Population Data | Risk of Over-Surveillance of Vulnerable Populations; Human Rights and Justice; Questions over the Need for Consultation & Communication with Vulnerable Groups on How Data will be Stored and Used; Lack of Guidance for Data Collection and Management |
| Electronic Databases | DNA Databases | Issues Arising from Poor Storage Management; Lack of Ethical Norms regarding Storage of Biomedical Data |
| Biometric Identification Systems | Facial Recognition Technologies | Trust and Legitimacy; Bias Against Marginalised Groups; Privacy and Security; Lack of Standardisation, Ethical Principles and Guidelines |
| Biometric Identification Systems | Artificial Intelligence | Reproduction of Systemic Bias of Human Decision-Making; Lack of Ethical Guidelines for Risk Minimisation; Use by Perpetrators of Crime |
| Biometric Identification Systems | Voice Pattern Analysis Tools | Privacy and Security; Lack of Standards and Established Norms |
| Surveillance Systems & Tracking Devices | Drones | Legitimacy of Use by Police; Development of an Aerial Politics of Security; Racial Bias & Privacy |
| Surveillance Systems & Tracking Devices | Smart Devices and Sensors | Trust and Legitimacy; Concern over Privacy and Risk of Over-Policing of Certain Groups |
| Surveillance Systems & Tracking Devices | Location and Hot Spot Analysis | Questionable Effectiveness in Reducing Crime; Challenges over Legitimacy of Product Selection; Lack of Guidance for Integration in Crime Reduction Agendas |
| Surveillance Systems & Tracking Devices | Body-Worn Cameras | Implications for Public-State Relationships; Racial Bias |
| Surveillance Systems & Tracking Devices | Autonomous Security Robots | Ethical Challenges of Automating Violent and Lethal Force |
| Surveillance Systems & Tracking Devices | CCTV & Visual/Optical Technologies | Lack of Standards for how Footage is Retrieved, Viewed and Stored; Risk of Increased Victimisation; Questions of Legitimacy |

3.2: Legal Considerations Associated with the Adoption of Emerging Technologies in Policing

UK case law identified as relevant to the adoption of emerging technologies in policing is set out in Appendix 3. International case law considered to be relevant is set out in Appendix 4, and Appendix 5 sets out the key provisions of significant legislation, the technologies to which they may apply and, where available, the relevant case law addressing each legislative provision. Appendix 5 can be used as a tool against which to evaluate the legal issues and considerations that may be presented by a specific piece of emerging technology.

3.2.1: The Law of Evidence and Emerging Technology

3.2.1.1: Improperly obtained evidence

Evidence can be obtained in a number of ways, including search of premises, search of persons, search of personal property, the taking of biological samples, and the use of surveillance technologies. In each case, for such evidence to be considered ‘legally obtained’, it must comply with the rules of evidence. Where it does not do so, the evidence will be considered to have been obtained improperly, and its admissibility can be questioned. The common law rule is that the admissibility of improperly obtained evidence is a balancing exercise, considering on the one hand ‘the interest of the citizen to be protected from illegal or irregular invasions of his liberties by the authorities’ and on the other ‘the interest of the State to secure that evidence bearing upon the commission of crime and necessary to enable justice to be done’.[5] Importantly, it has been recognised that such evidence should not be ‘withheld from the Courts of law on any merely formal or technical ground.’[6] In addition to possible challenges to admissibility, where information is improperly obtained the most likely grounds of challenge are Article 5 (Right to Liberty and Security), Article 6 (Right to a Fair Trial), and Article 8 (Right to Private and Family Life) of the European Convention on Human Rights.[7]

Beyond the common law principles, there are statutory forms of regulation that will bear on whether intelligence or evidence has been legally obtained. These include the Criminal Procedure (Sc) Act 1995, the Regulation of Investigatory Powers (Scotland) Act 2000, the Police Act 1997, and the Investigatory Powers Act 2016, as well as compliance with the National Assessment Framework for Biometric Data Outcomes and, prospectively, the Scottish Biometrics Commissioner’s Code of Practice.[8]

The use of emerging technologies is highly likely to challenge the boundaries of these legislative measures.[9] For example, the Criminal Procedure (Sc) Act 1995 s18(3) requires that all records of physical data taken from an individual in custody be destroyed as soon as possible when it is decided that proceedings are not to be raised, or where proceedings do not conclude in a conviction. If Police Scotland were to consider the development of a specific biometric database, or the deployment of technologies dependent on such data, they would have to consider how such a database is to be populated and adopt an appropriate destruction policy that complies with s18.
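The destruction requirement in s18(3) lends itself to being encoded directly in database logic. The following minimal sketch (in Python; the record fields, outcome categories, and function names are hypothetical and not drawn from any actual Police Scotland system) illustrates the kind of rule such a destruction policy implies:

```python
from dataclasses import dataclass
from enum import Enum, auto

class CaseOutcome(Enum):
    PENDING = auto()
    NO_PROCEEDINGS = auto()  # decision taken not to raise proceedings
    NO_CONVICTION = auto()   # proceedings concluded without conviction
    CONVICTED = auto()

@dataclass
class BiometricRecord:
    record_id: str
    subject_id: str
    outcome: CaseOutcome

def records_due_for_destruction(records: list[BiometricRecord]) -> list[BiometricRecord]:
    """Flag records that an s18(3)-style rule would require destroying
    'as soon as possible': no proceedings raised, or no conviction."""
    destroy_when = {CaseOutcome.NO_PROCEEDINGS, CaseOutcome.NO_CONVICTION}
    return [r for r in records if r.outcome in destroy_when]
```

A production system would, of course, also need to handle the statutory exceptions, linked copies of the data held elsewhere, and audit logging of each destruction, none of which this sketch attempts.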

The regulation of prints and samples is addressed in s18-20 of the CPSA 1995. These provisions provide lawful authority for the retention and use of such samples including use of any data derived from those samples.[10] Accordingly, where any emerging technology is used for the purposes of cataloguing, analysing, or storing such data consideration must be given to these provisions. The Code of Practice issued by the Scottish Biometrics Commissioner has the potential to address the ambiguity surrounding the boundaries of such provisions and is to be welcomed.[11]

Recently, and following some controversy, the Police, Crime, Sentencing & Courts Act 2022 introduced a system of regulation specifically focused on authorisation of the extraction of information from electronic devices.[12] These provisions should offer clarity on the process to be followed and the limitations on the extraction of information. Data can be extracted, if a device is voluntarily provided, for the purposes of preventing, detecting, investigating, or prosecuting crime, helping to locate a missing person, or protecting a child or an at-risk adult from neglect or physical, mental, or emotional harm.[13] Importantly, there are restrictions on the scope of these purposes. For example, the authorised person must reasonably believe that information stored on the electronic device is relevant to a reasonable line of enquiry which is being, or is to be, pursued by an authorised person.[14] Further, if the extraction of data is likely to include data beyond that relevant to the specific issue, there should be a test of proportionality.[15] In addition, the authorised person should ensure they are compliant with the Code of Practice.[16] There are significant protections offered to children in that those under 18 do not have capacity to consent to the extraction of data from their devices. The implication of this is that an authorised person would have to establish the age of the person consenting in order to ensure the legality of any subsequent extraction of data.[17]

3.2.1.2: Disclosure of evidence

Part 6 of the Criminal Justice & Licensing (Sc) Act 2010 sets out the rules of disclosure. Those rules require that an investigating agency provide all information relevant to the case for or against the accused of which the agency is aware and which was obtained in the course of the investigation.[18] In addition, if requested to do so by the prosecutor, the investigating agency must provide the prosecutor with any of that specific information.[19] Here, information is defined as any ‘material of any kind’.[20] This is important in the context of emerging technologies because, when designing or selecting those technologies, consideration should be given to whether, and how, information can be shared with the Crown Office & Procurator Fiscal Service so that it can comply with its obligation to disclose information to the defence.[21] A failure to do so at an early stage may impact on the ability of the police to comply with the rules of disclosure, and cases could be challenged on the basis of the prejudicial effect of that information not being made available.[22] This is likely to present a particularly acute problem as automated decision-making systems, artificial intelligence, and ultimately algorithms become more embedded in policing practice, since the transparency of such systems is problematic.[23]

At the international level, measures are being developed that seek to facilitate the disclosure of electronic evidence. This has most recently taken the form of a Protocol to the Cybercrime Convention.[24] Although the UK is not, at this point in time, a signatory, the Protocol will enter into force once there are five ratifications. The Protocol seeks to enhance cooperation between states to ensure that offences recognised by the Cybercrime Convention (such as illegally accessing a computer system, data interference, and misuse of devices) can be effectively investigated and prosecuted.[25] It goes so far as to require that parties introduce domestic legislation that facilitates the disclosure of personal information from providers of domain name registration services.[26] It also requires that parties introduce domestic measures that facilitate the sharing of subscriber information.[27] While not specifically focused on police-citizen relationships, it does form part of the framework that establishes the lawful basis on which evidence can be exchanged, and this in turn will impact on the transparency, accountability, and connected trust that the police service is able to foster.[28] In both cases, improperly obtained evidence and failures in the disclosure of evidence have the potential to erode public trust in the police service.

Recommendation

Compliance with the law of evidence and the rules of disclosure will impact on the lawfulness of evidence, its admissibility, and the protection of rights (specifically the Article 6 right to a fair trial and the Article 8 right to privacy). Therefore, at the outset of designing, adapting, or adopting an emerging technology, consideration should be given to how that technology is to be used. This means identifying whether the technology is being used to collect evidence or intelligence, and whether it is being used in an overt or covert manner, so that the appropriate procedural law can be complied with.

3.2.2: Data Protection

The Data Protection Act 2018 sets out the principles that must be complied with in the processing of ‘personal data’. Part III specifically sets out the regulation of personal data in the context of law enforcement processing. Additional safeguards are required when such processing includes sensitive processing.[29] That includes: (a) the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership; (b) the processing of genetic data, or of biometric data, for the purpose of uniquely identifying an individual; (c) the processing of data concerning health; and (d) the processing of data concerning an individual's sex life or sexual orientation.[30] It is highly likely that many of the emerging technologies will involve such processing.

Of particular relevance in the context of emerging technologies, there is a prohibition on the use of automated processing as the sole foundation of decision making.[31] This prohibition only applies where the decision is a ‘significant’ one, which means either it ‘produces an adverse legal effect concerning the data subject’ or ‘significantly affects the data subject’.[32] In order to be able to use automated processing as the sole foundation of decision making it must be authorised by law and ‘the controller must, as soon as reasonably practicable, notify the data subject in writing that a decision has been taken based solely on automated processing.’[33]

An integral part of complying with the provisions of the data protection framework is a clear mapping of what data is being processed and how. In addition, for that processing to be lawful, an appropriate policy document needs to be in place.[34] The second data protection principle is likely to play an important part in the assessment of emerging technologies, since it requires that where data is being collected for law enforcement purposes those purposes are ‘specified, explicit’ and ‘legitimate’, and that data must not be processed for an incompatible purpose.[35] The legislation expressly prohibits the processing of data for non-law enforcement purposes unless it is authorised by law.[36] Within the legislation, ‘law enforcement purposes’ is given a narrow definition: ‘the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security’.[37] For example, this would not encompass the processing of data relating to an individual who is a missing person (unless it involved the assertion either that a criminal offence had been committed or that the individual presents a public security risk).

Since it is well recognised that the police are often engaged in activities considerably beyond the scope of this narrow definition, Police Scotland would have to ensure that there is an appropriate ‘authorisation in law for that processing’. In the context of emerging technologies, this is likely to be important in the interaction with private sector organisations who may be engaged in facilitating the development of the technology or providing services to those involved in policing. Of particular importance will be the restriction on the use of data for the development of the technology itself, or the sharing of data with third parties. There are already a number of cautionary cases: the design and implementation of cyber kiosks, the development of track and trace apps for public health purposes, and the use of biometric identification services provided by commercial entities. Each of these examples is addressed further below (Section 3.5).

Again, the automation of aspects of data processing is likely to present challenges for compliance with the data protection principles set out in Part III. This is because, as artificial intelligence progresses, there may be difficulties with the accountability and transparency of its operation. With this in mind, it will be necessary to ensure that appropriate procedures are in place so that consideration is given to any ambiguity in the quality, reliability, or transparency of how data is being processed by automated means.

Recommendation

In order to ensure robust data protection compliance:

1. Data protection policies should be kept under regular review to ensure that they capture the development and use of emerging technologies in the context of policing

2. A data protection impact assessment (DPIA) should be carried out prior to the development of any emerging technology (and revised as it progresses from trial to deployment)

3.2.3: Equality and Human Rights

This report has focused on the use of emerging technologies in the context of police-citizen interactions. Careful consideration must be given to the characteristics of the affected citizen or citizen group, which may present additional legal and ethical questions. For example, where an emerging technology is to be deployed in a context involving children (defined as those under 18), steps will have to be taken to ensure compliance with the United Nations Convention on the Rights of the Child. Following a ruling of the Supreme Court in October 2021, issues were raised concerning the constitutional validity of the Scottish Parliament’s attempt to incorporate the Convention into domestic law.[38] However, it is the intention of the Scottish Government to pursue implementation. The consequences of this are as yet unclear and merit a specific piece of research that can fully explore how children’s rights may be affected by the implementation of emerging technologies in the context of policing, and how those rights can be appropriately secured.

Recommendation

Further research is required to consider the legal and ethical implications of the use of emerging technologies in policing activities involving children, with a view to ensuring compliance with the United Nations Convention on the Rights of the Child.

As it stands, public authorities are bound by the public sector equality duty set out in section 149 of the Equality Act 2010. Under this duty, a public authority “must, in the exercise of its functions, have due regard to the need to:

(a) eliminate discrimination, harassment, victimisation, and any other conduct that is prohibited by or under this Act;

(b) advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it;

(c) foster good relations between persons who share a relevant protected characteristic and persons who do not share it.”[39]

The definition of public authority includes the Scottish Biometrics Commissioner,[40] the Scottish Police Authority,[41] and the Chief Constable of the Police Service of Scotland.[42] As a result, when considering the introduction of emerging technologies, public authorities engaged in determining policing policy and practice need to consider whether the deployment of that technology complies with this public sector equality duty.[43] It should be acknowledged that the jurisprudence of the Courts establishes that an equality impact assessment is not required in order to discharge this duty.[44] Nor will a ‘box-ticking’ exercise necessarily secure compliance with it.[45] Rather, the focus is on the substance and circumstances of what has occurred, ultimately asking whether ‘due regard’ was given. That being said, from a practical perspective, conducting an equality impact assessment is likely to be one mechanism to support organisations in ensuring, and evidencing, that ‘due regard’ has been given. While these considerations are mainly matters of governance, sight should not be lost of the fact that deploying emerging technologies without such consideration has the potential to exacerbate discrimination, harassment, and victimisation.[46] For example, this reality can be seen in the concerns raised over the deployment of facial recognition technologies discussed below (Biometric Identification Systems).

All emerging technology used in a policing context will have the potential to impact on human rights. It is possible that technologies may support human rights protection. For example, if police officers were to have access to real-time translation, they would be able to support the Article 6 right to a fair trial by promptly providing an accused person with information on the nature and cause of the accusation against them.[47] However, there are equally many concerns about the potential negative impacts.[48]

The majority of legal challenges to the deployment of new technologies have been framed in terms of the Article 8 right to privacy.[49] Examples of these challenges are cited where relevant in the discussion below.

3.2.3.1: Databases

The legal regulation of the design and operation of databases is dictated predominantly by the Data Protection Act 2018. The key provisions likely to be triggered by emerging technologies are set out in the table of legislation in Appendix 5. In the main, the key factor is to determine whether a database concerns the processing of personal data and whether that data is of a sensitive nature, as this assessment will determine whether, and if so how, it is regulated by the DPA.

In the UK, to date, the use of databases has been challenged on the basis that they breach data protection law and that the inclusion of personal data on them is ultimately an infringement of Article 8 ECHR. These challenges have related to three specific aspects: whether the data should be retained, for how long, and at what point it should be deleted.[50]

3.2.3.2: Biometric Identification Systems

In many respects there is overlap between the regulation of databases and the operation of biometric identification systems. By and large such systems will be operationalised through the use of a database of some kind. However, the key issue here is that biometric information is inherently sensitive personal information and so demands greater protection. Biometric data has most recently been given a statutory definition as “information about an individual’s physical, biological, physiological, or behavioural characteristics which is capable of being used, on its own or in combination with other information (whether or not biometric data), to establish the identity of an individual, and may include: (a) Physical data comprising or derived from a print or impression of or taken from an individual’s body, (b) A photograph or other recording of an individual’s body or any part of an individual’s body, (c) Samples of or taken from any part of an individual’s body from which information can be derived, and (d) Information derived from such samples.”[51]

As noted earlier, Scotland has its own Biometrics Commissioner, whose role is “to support and promote the adoption of lawful, effective and ethical practices in relation to the acquisition, retention, use and destruction of biometric data for criminal justice and police purposes”.[52] In April 2022, they issued their draft Code of Practice.[53] The Guiding Principles and Ethical Considerations are set out in the box below.

The code of practice applies to Police Scotland, The Scottish Police Authority, and the Police Investigations and Review Commissioner. When acquiring, retaining, using, or destroying biometric data, these authorities must ensure compliance with this code of practice (once finalised).[54]

Draft Code of Practice On the acquisition, retention, use and destruction of biometric data for criminal justice and police purposes in Scotland (April 2022)

Guiding Principles and Ethical Considerations

1. Lawful Authority and Legal Basis

2. Necessity

3. Proportionality

4. Enhance public safety and public good

5. Ethical behaviour

6. Respect for the human rights of individuals and groups

7. Justice and Accountability

8. Encourage scientific and technological advancement

9. Protection of children, young people, and vulnerable adults

10. Promoting privacy enhancing technology

11. Promote Equality

12. Retention periods authorised by law

Although a failure to comply with the code will not in itself give rise to grounds for legal action, compliance with the code must be taken into account by a court or tribunal in any proceedings, whether civil or criminal, to which the code may be relevant.[55] What this means in practical terms is that it must be taken into account in deciding whether evidence has been improperly obtained or, for example, in disciplinary proceedings relating to a specific officer’s professional conduct.

As with the introduction of any code of practice, it will be necessary to ensure that those engaged in the biometric data supply chain are given appropriate training.

The collection and use of biometric data (in the form of facial recognition) has received significant judicial attention in the English and Welsh Courts through the decision in R (on the application of Bridges) v Chief Constable of South Wales.[56] This case involved an examination of the trial use of automatic facial recognition (AFR) software. Importantly, the case involved the overt use of such systems, and so its findings do not address the issues that may be presented by covert use and compliance with the Regulation of Investigatory Powers Act 2000. However, the conclusions of the Court merit detailed consideration; they are contained verbatim in Appendix 3 and summarised in the box at Section 3.5.2.

There are a few critical aspects to the Court’s findings. Firstly, it made clear that in evaluating whether there had been an interference with the Article 8 right to privacy, and whether that interference was “in accordance with the law”, as a basic rule, “the more intrusive the act complained of, the more precise and specific the law must be.”[57] Law here is construed broadly to include the policies that accompany legal frameworks. Following on from this, the Court was critical of the governance framework in use by the police force at the time of the pilot. In particular, it identified two areas of discretion that merited greater control: the selection of the individuals to be included on watch lists when AFR Locate was deployed, and the locations where AFR Locate might be deployed. Further, despite there being a data protection impact assessment, that assessment had failed to grasp the risk to the human rights and freedoms of data subjects. In addition, in order to comply with the public sector equality duty, the police force should have taken steps to specifically evaluate the technology’s potential discriminatory impacts (before, during, and after the trial). Because the police force failed to evaluate the software in this way, it had not ‘done all that it reasonably could’ to discharge its duty.[58]

The Bridges decision noted the role of the Surveillance Camera Commissioner, and that this role would specifically cover the use of surveillance cameras in the context of AFR.[59] Of particular note, the Court recognised the value of the Code of Practice issued by the Secretary of State and the guidance produced by the Surveillance Camera Commissioner on the ‘Police Use of Automated Facial Recognition Technology with Surveillance Camera Technology’, issued in March 2019. However, it was critical of the generic nature of each. It emphasised that, of particular concern, there was no specific policy addressing the justification for including an individual on a watch list and, similarly, no policy justifying the selection of particular locations for the use of AFR.

Since the Bridges decision, the role of the Biometrics Commissioner has been brought together with that of the Surveillance Camera Commissioner, with one individual appointed to perform this dual role. However, it is important to note that while the Code of Practice and the role of the Biometrics and Surveillance Camera Commissioner relate to England and Wales, they are influential in the framework of accountability. This will particularly be the case where working across borders requires compliance with the frameworks of England and Wales as well as those in Scotland.

3.2.3.3: Electronic Surveillance and Monitoring Systems

The use of emerging technology for the purposes of electronic surveillance and monitoring will be subject to the regulatory frameworks set out in the Regulation of Investigatory Powers (Scotland) Act 2000, the Regulation of Investigatory Powers Act 2000, the Police Act 1997, and the Investigatory Powers Act 2016.[60] Where there is a failure to comply with these systems of regulation, there is the potential for a breach of human rights, most likely of the Article 8 right to private and family life. In addition, the admissibility of any evidence that results from such actions may be open to challenge (Ross et al., 2020).

The Regulation of Investigatory Powers (Scotland) Act 2000 sets out the framework for the regulation of surveillance, addressing ‘directed surveillance’ and ‘intrusive surveillance’, and can be used as one illustration of how the current patchwork of legislation may impact on the lawful use of emerging technology.[61] Surveillance is directed when it is covert but not intrusive, relates to a specific investigation or operation, is likely to result in obtaining private information about a person (who may or may not be the focus of the investigation), and is undertaken otherwise than as an immediate response to events or circumstances in which seeking authorisation would not be reasonably practicable.[62] Intrusive surveillance is covert surveillance that focuses on residential premises or a private vehicle, whether carried out by an individual or by means of a surveillance device.[63] Importantly, surveillance will not generally be considered intrusive when a surveillance device is not located on the premises or in the vehicle subject to surveillance, but it will be considered intrusive if such a remote surveillance device can achieve a sufficiently reliable quality of data.[64]

Authorisation of directed surveillance can be given where the designated person is of the view that the action is necessary and proportionate.[65] The grounds on which directed surveillance can be authorised are much broader than those for intrusive surveillance. Directed surveillance can be authorised on the grounds that it is necessary to prevent or detect crime or prevent disorder, in the interests of public safety, or for the protection of public health. Intrusive surveillance, on the other hand, can be authorised only where it is considered necessary to prevent or detect serious crime.[66] Both directed and intrusive surveillance must be authorised by the appropriate designated person in order to be lawful.[67]

The implication of these provisions for the use of emerging technologies is that, in order to be lawful, consideration has to be given to the context and purpose for which the technology is being used (e.g. covert use, directed surveillance). Thereafter, a system of authorisation will need to be embedded to ensure that the technology is accompanied by the appropriate authorisation.[68]
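The statutory scheme described above is, in effect, a decision tree, and embedding a system of authorisation could start from logic of the following kind. This is a deliberately simplified sketch (Python; the field names compress the statutory tests and are illustrative only, not a substitute for the Act):

```python
from dataclasses import dataclass

@dataclass
class SurveillanceProposal:
    covert: bool
    targets_residence_or_private_vehicle: bool
    specific_investigation: bool
    likely_private_information: bool
    immediate_response: bool  # immediate response to events, authorisation impracticable

def classify(p: SurveillanceProposal) -> str:
    """Crude mirror of the RIPSA 2000 categories, for illustration only."""
    if not p.covert:
        return "overt: outside the directed/intrusive definitions"
    if p.targets_residence_or_private_vehicle:
        return "intrusive: authorisation only to prevent or detect serious crime"
    if (p.specific_investigation and p.likely_private_information
            and not p.immediate_response):
        return ("directed: authorisation if necessary and proportionate "
                "(crime, disorder, public safety, public health)")
    return "unclassified: seek legal advice before deployment"
```

For example, a covert camera trained on a private driveway might route to the intrusive branch (subject to the data-quality caveat noted above) and so require the narrower serious-crime justification.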

In assessing whether intrusive surveillance is necessary and proportionate, consideration will be given to whether the same information could be obtained by other means. In the context of emerging technology, this means that if a technology is developed that would obtain the same data as an already available means, consideration should be given to whether the use of that technology is needed at all.

3.2.3.4: Regulation of Automated Decision Making

The Council of Europe Convention for the Protection of Individuals with regard to the Processing of Personal Data sets out the international framework that supports data protection. Applying to both the public and private sectors, it requires that state parties ensure that data is processed in a manner that is fair and transparent, does not go beyond the scope of the original purpose, and is only preserved in a form that allows identification for the shortest possible period of time.[69] While the Convention does allow the processing of special categories of data, such as biometric data and genetic data, such processing is only lawful when accompanied by appropriate safeguards.[70] Further, an individual should not be subject to a solely automated decision-making process unless their views have been taken into account.[71] Importantly, there can be an exception to this provision where it is necessary and proportionate for the prevention, investigation, and prosecution of criminal offences.[72] These provisions are implemented into domestic law through the Data Protection Act 2018, ss 49 and 50.

In 2017, the Council of Europe’s Committee of Experts on Internet Intermediaries completed a study into the human rights implications of automated data processing techniques.[73] The key issues they examined were automation, data analysis, and adaptability. They highlight that an examination of human rights implications must consider the relationship between the specific algorithm used and the dataset to which it is applied.[74] This is because, while it is possible for an algorithm to be inherently flawed by design, it is also possible that the dataset to which it is applied contains a particular bias that is then replicated or magnified by the algorithm (Završnik, 2021). There are further challenges that result from human interrogation and interpretation of the algorithm (Binns, 2022). The expertise of the human user will be highly influential in whether the algorithm’s use is human rights compliant.
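To make the replication and magnification dynamic concrete, the sketch below (Python; the areas, counts, and the simple allocation rule are all invented for illustration) simulates a naive ‘hot spot’ algorithm that sends the patrol budget wherever the most incidents have been recorded. Although the true offence rates of the two areas are assumed equal, a purely historical recording bias compounds year on year:

```python
# Two areas with equal true offence rates; area A starts with more
# *recorded* incidents only because it was historically over-policed.
recorded = {"A": 120, "B": 60}

for year in range(5):
    # Naive rule: the whole patrol budget goes to the current 'hot spot'.
    hot_spot = max(recorded, key=recorded.get)
    # Patrols there record roughly 50 extra incidents a year regardless
    # of the (equal) underlying rates, so the data increasingly
    # 'confirms' the original bias while area B goes unrecorded.
    recorded[hot_spot] += 50
    print(year, dict(recorded))
```

The flaw here sits in neither the data nor the allocation rule alone but in their interaction, which is exactly the relationship the Committee of Experts highlights.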

Specifically, concerns have been raised in the context of automated decision making and the use of artificial intelligence (Binns, 2022). At the moment, regulation of this in the UK is very limited. This is for two reasons. In some cases, the data driving the system would not meet the definition of personal data for the purposes of regulation i.e., it does not relate to an identified or identifiable individual. However, if such data is combined with other data to identify an individual then that will become regulated data. In other cases, the way in which the algorithm is applied to the data is opaque. What this means is that even if the data is captured by the data protection provisions, there will be a limited ability to achieve transparency in how that data is processed.

At this point, the UK Office for Artificial Intelligence has issued Guidelines for the Procurement of AI in Government (2020), which offer insights into key considerations. In addition, it has issued an Ethics, Transparency and Accountability Framework (2021), which is accompanied by an algorithmic transparency template (December 2021). Importantly, in contrast to the Canadian system discussed later on, these are not legally binding.

The House of Lords Justice and Home Affairs Committee has raised concerns that there is currently no central register of AI technologies in the UK.[75] Its view is that this lack of transparency is problematic because the uses of these technologies cannot be interrogated and, in turn, they cannot effectively be held accountable. In the context of policing, the Committee argues that there should be a ‘duty of candour’ on the police to ensure transparency in the use of AI-enabled technologies.[76]

Algorithms have the potential to mask human bias in a cloak of objectivity, and to exacerbate and escalate those biases.[77] The JHAC made clear that there is a significant “issue that there is no minimum scientific or ethical standards that an AI tool must meet before it can be used in the criminal justice sphere.”[78] To this end, they suggest that the solution is to establish a national body to engage in this process. Since the time of their report there has been some progress on this front, with the Alan Turing Institute leading a project seeking to draft global technical standards.[79]

There are a number of international developments that can be drawn upon to help frame an ethical approach to the use of algorithms. For example, in May 2019 the OECD issued its Principles of Artificial Intelligence.[80] Shortly after, in June of that year, the G20 issued its AI Principles, and in November 2021 UNESCO adopted a Recommendation on the Ethics of AI.[81] This patchwork is soon to be further embellished by the work of the Council of Europe, which is in the process of developing a convention on the use of AI that is due to be completed in 2023.[82]

3.3: Recommendations from the existing research examining the adoption and use of new emerging technologies in policing for best practice (including in relation to scientific standards and ethical guidelines) in the wider dissemination of these technologies in police practice

3.3.1: Electronic Databases

Thirteen of the documents from the research and policy-relevant literature reviewed offer specific recommendations, guidelines, and suggestions drawn from empirical research directly examining police practice for improving the use of electronic databases in policing (Asaro 2019; Aston et al., 2021; Babuta 2017; Babuta and Oswald 2020; Clavell 2018; McKendrick 2019; National Analytics Solutions 2017; Neiva et al., 2022; Neyroud and Disley 2008; Skogan and Hartnett 2005; Sanders and Henderson 2013; Weaver et al., 2021; Williams et al., 2021). Of these, twelve focus on the management and use of data and data sharing technologies between third parties. One specifically makes recommendations for the use of social media technologies and data (Williams et al., 2021), and two specifically include recommendations for data pertaining to vulnerable people (Asaro 2019; Babuta 2017). Only two of the documents reviewed present clear, evidence-based recommendations or guidelines for improving the use of community policing applications and data (Aston et al., 2021; Clavell et al., 2018). None of the documents reviewed presented evidence-based recommendations, guidelines, or examples of best practice for improving the use of data pulling platforms, DNA databases, or open-source data. This suggests that these are areas requiring further research to ascertain what might work best in disseminating these forms of technology more widely within policing practice.

3.3.1.1: Data Sharing Platforms and Third-Party Data Sharing

Of those that make specific recommendations for improving the use of databases and third-party data sharing platforms, Neiva et al. (2022) recommend better management of the expectations of all professionals involved in working with the police in criminal investigations, suggesting that greater communication about the needs of different sector organisations can help to strengthen the interoperability of working with multiple datasets, as well as help with managing data subjects’ privacy and human rights. They argue that this is especially important when dealing with big data. Skogan and Hartnett (2005) use evidence from the Chicago Police Department’s experience of centralised data warehouses to emphasise the need for the host organisation to take an active role in clarifying expectations and setting standards.

Neyroud and Disley (2008) recommend strong, transparent management and oversight of data sharing technologies involving third-party organisations as essential for minimising the risk of criticism of the legitimacy of police activity, while Sanders and Henderson (2013) draw upon evidence from Canada to emphasise the need for greater material, social, and organisational integration to enable effective use of these technologies. McKendrick (2019) recommends clear transparency regarding the handling of data, especially by private companies, and clear information and communication as to data access and limitations by third parties.

Weaver et al., (2021) offer a series of recommendations for improving the service connection juncture between police officers and health professionals over the management of suicide risk for subjects in police custody, arguing that a balance needs to be struck between risk assessment and communication between parties. They also discuss how an integrated approach can help facilitate evidence-based assessment, as well as inform the development of data collection, management, and sharing processes.

Specific guidance for improving the use of these technologies by achieving greater standardisation of practices is provided by the National Analytics Solutions (2017) report. They argue that there is a need for: 1) rebalancing the roles and responsibilities of police professionals with other parts of government that have different cultures and practices regarding the collection, storage, processing, and use of data; 2) assessments to establish best practice and decision-making; 3) greater clarity over the legal obligations on data storage and processing across all parties, including private sector third parties; 4) greater clarification of consent issues relating to data subjects; and 5) clarification regarding the duration of storage. In addition, to minimise the risk of harm, they recommend that data risk assessments be carried out and argue that data protection laws should serve as the minimum standard of consideration (National Analytics Solutions 2017). They also provide an ethical framework for the management of data and data sharing, acknowledging the need for ethically operated solutions that ensure that the public can trust the technology and that their privacy will not be placed at risk, and arguing that the implementation of such a framework should be underpinned by four dimensions: society, fairness, responsibility, and practicality (ibid). However, they also acknowledge that additional research is required to devise and test an ethical standards framework specifically for big data (ibid). Similarly, Babuta (2017) specifically recommends the standardisation of concepts for entering information into police databases and calls for a standard lexicon across all parties. They also recommend the creation of shared MASH (Multi-Agency Safeguarding Hub) databases to allow for better data sharing practices with partner agencies, underpinned by the development of a clear decision-making framework at the national level to ensure the ethical storage, management, and use of data (ibid).

3.3.1.2: Social Media Platforms and Data

Only one article includes specific recommendations for improving police practice regarding the use of social media technologies and data (Williams et al., 2021). Williams et al. (2021) recommends greater cooperation between policymakers, social science, and technology researchers for the development of workable, innovative guidance for working with social media data specifically in the policing of hate crime and malicious social media communications.

3.3.1.3: Vulnerable Population Databases and Datasets

Two documents discuss specific recommendations for vulnerable population databases and datasets (Asaro 2019; Babuta 2017). Asaro (2019) outlines the need for the development and implementation of an Ethics of Care approach to the management and use of data concerning vulnerable data subjects, whereas Babuta (2017) discusses how the management of data concerning vulnerable people is currently conducted in the UK using local police datasets. Babuta (2017) argues that local authorities, social services, and the police should collaborate closely when identifying vulnerable individuals in need of safeguarding, and suggests that MASH databases would help to facilitate this. However, they argue that further research into the use of national datasets is necessary to gain a better understanding of the risks involved in the use of such technologies.

3.3.1.4: Community Policing Applications

Two of the documents present specific, clear, evidence-based recommendations for improving the use of community policing applications and data (Aston et al., 2021; Clavell et al., 2018). Aston et al., (2021) draw on evidence from interviews with members of the public from minority backgrounds, and with members of organisations that work with minority population groups and police agencies, in nine European countries, to argue that community policing models, data protection, and security procedures can enhance public confidence in sharing information with the police. Data protection and the potential abuse of information need to be addressed through the secure storage of information; demonstrating enhanced data security through improvements to data storage systems, protections, and procedures can evidence a procedurally just approach that will improve public confidence in policing and in information sharing (Aston et al., 2021). Clavell et al., (2018) present a set of ethical guidelines drawn from empirical research to ensure that the use of technology in community policing does not result in increased victimisation, inequalities, or inefficiency in its storage and use, and suggest that greater integration between academic researchers and the policy community is needed to develop and implement specific solutions that are sensitive to the needs of all parties (see Example 1 below).

3.3.2: Biometric Identification Systems

Twenty-one of the documents reviewed offer specific recommendations, guidelines, and suggestions drawn from empirical research examining police practice for the adoption and dissemination of biometric identification systems in policing (Alikhademi et al., 2022; Almeida et al., 2021; Asaro 2019; Babuta 2017; Babuta and Oswald 2017; Bradford et al., 2022; Bragias et al., 2021; Clavell 2018; Dechesne 2019; Ernst et al., 2021; L’Hoiry et al., 2021; McKendrick 2019; National Analytics Solutions 2017; Neyroud and Disley 2008; Oswald 2019; Smith and Miller 2022; Urquhart and Miranda 2021; van ‘t Wald et al., 2021; Whittlestone et al., 2019; Wilson and Kovac 2021; Williams 2020). Of these, eight discuss the management of facial recognition technologies, while thirteen propose recommendations and suggestions for the use of artificial intelligence technologies. None of the documents reviewed offer suggestions for voice recognition technologies, suggesting that this is another possible area for future research.

Example 1: Evidence-Based Recommendations for Best Practice: Implementation of ICT-mediated Community Policing Resources (Clavell et al., 2018)

In order to ensure a successful implementation of ICT-mediated community policing resources, the following aspects need to be taken into account:

1. Relevance

Clear needs, goals and demands have to be detected.

2. Empowerment

The preferences of both LEAs and citizenry have to be taken into account.

3. Stakeholders

A wide scope of stakeholders has to be considered.

4. Context

It is important to bear in mind the spatial and temporal conditions that will affect the functioning of the system.

5. Trust

Transparency and accountability should not be seen as trade-offs or obstacles to an effective policing strategy.

6. Agency and Participation

Differing involvement levels available for citizenry may co-exist. However, undesired or involuntary involvement is strongly discouraged.

7. Safety

The involvement of citizens in security tasks has to be limited. Otherwise, disproportionate risks could be assumed by community members who are not prepared for, or legally entitled to undertake, certain high-risk actions or involvements.

8. Anonymity

Anonymous interaction through ICTs should not be perceived as a drawback; it could make people more willing to participate and collaborate.

9. Social Media

Reuse of others’ social media content, by both LEAs and citizens, has to be carried out with precaution.

10. Accessibility

Usability and simplicity of functions help to achieve the relevant goals, as well as to compensate for regulatory and maintenance complexity.

3.3.2.1: Facial Recognition Technologies

Recommendations pertaining to the use of facial recognition technologies focus on improving public support for these technologies, as well as on the need to devise new ethical principles and guidelines for their use. For example, Bragias et al. (2021) suggest that if police authorities and policy makers identify and address the specific concerns raised by members of the public, are transparent in their practices, and educate the public about misinformation, then trust, confidence, and support for the use of FRT by police may increase. Williams (2020) argues that trust in systems like facial recognition technologies and biometric identification systems, which are predicated on human prejudicial biases and assumptions, would increase if the biases and limitations of these systems were named and interrogated prior to development. Babuta and Oswald (2020) explain that the current lack of organisational guidelines or clear processes for scrutiny, regulation, and enforcement of biometric identification systems, including facial recognition technologies, should be addressed as part of a new draft code of practice, which should specify clear responsibilities for policing bodies regarding scrutiny, regulation, and enforcement of these new standards. Similarly, Smith and Miller (2022) argue that clear ethical principles and guidance should be implemented in a standardised manner to mediate the potential conflicts these technologies raise between security on the one hand and individual privacy and autonomy on the other. They also argue that these principles can be used to support appropriate law and regulation for the technology as it continues to develop. The National Physical Laboratory and Metropolitan Police Force (2020) recommend that further trials be conducted to explore the benefits and limitations of facial recognition technologies in different policing activities, and argue that careful consideration must be given to the use of these technologies in defined spaces and that decisions as to where and how they should be deployed must be made carefully. Chowdhury (2020) recommends a generational ban on the use of these technologies until further guidelines and legal stipulations have been developed, and argues that mandatory equality impact assessments should be introduced. They also recommend the collection and reporting of ethnicity data and regular, independent audits, as well as the introduction of protections for minority groups.

3.3.2.2: Artificial Intelligence

Recommendations and suggestions from the existing research in relation to Artificial Intelligence technologies focus on three key themes: 1) minimising bias, especially bias against marginalised communities; 2) establishing standards for predictive policing technologies; and 3) raising awareness by enhancing communication about algorithms, to foster greater understanding of what these forms of technological application involve.

For example, Alikhademi et al., (2022) discuss the use of Artificial Intelligence in predictive policing and how it can replicate the systemic bias of previous human decision-makers. They recommend that the pros and cons of the technology be evaluated holistically to determine whether, how, and when these technologies should be used in policing. Similarly, Asaro (2019) argues that the adoption of AI technologies needs to be undertaken alongside educational processes designed to enhance critical understanding of the datasets on which it operates and the biases that these datasets may represent. From this, they recommend an AI Ethics of Care approach to minimise the risk of harm and improve perceptions of fairness. According to Asaro (2019: 44), an ethics of care approach takes a holistic view of the values and goals of systems design and considers the interaction and interrelation between an intervention and the prediction of outcomes within specific contexts. The goals and values of the technology and its implementation should be of benefit to everyone (ibid). Whittlestone et al. (2019) argue that high-level principles can help to ensure that these technologies are developed in ways that minimise the risk of bias against marginalised groups. They also recommend building consensus around their use within policing and with other institutions in ways that are culturally and ethically sensitive, and suggest that the costs and benefits of the use of these technologies for marginalised groups should be weighed prior to their implementation for specific purposes. In addition, they recommend building on existing public engagement efforts to understand the perspectives of different publics surrounding the use of these technologies, especially those from marginalised communities, to inform decision-making about their implementation (Whittlestone et al., 2019).

Five of the documents reviewed discuss the need for clear ethical guidelines and laws to minimise the potential harms associated with the use of artificial intelligence technologies in policing and offer research-informed suggestions to this end (Almeida et al., 2021; Alikhademi et al., 2022; Asaro 2019; Whittlestone et al., 2019; Urquhart and Miranda 2021). For example, Alikhademi et al., (2022) review the existing research on fairness in relation to machine learning and artificial intelligence in predictive policing to develop a set of recommendations for fair predictive policing that minimises the risk of racial bias (see Example 2). Whittlestone et al. (2019) recommend the development of a set of shared concepts and terminology for an ethics of algorithms and AI, and suggest that further research is needed to explore the ambiguity of commonly used terms in order to build consensus for their use in ways that are culturally and ethically sensitive. In addition, they recommend building a more rigorous evidence base for the discussion of the social and ethical issues surrounding the use of AI in policing. Finally, Hobson et al., (2021) argue that greater awareness of, and exposure to, the successful use of algorithms through trials can help to enhance the general acceptability of these technologies.

3.3.3: Surveillance Technologies and Tracking Devices

Fourteen of the documents from the research and policy-relevant literature reviewed offer specific recommendations, guidelines, and suggestions drawn from empirical research directly examining police practice for improving the use of surveillance technologies and tracking device technologies in policing (Aston et al., 2022; Laufs and Borrion 2021; Koper et al., 2019; Hendrix et al., 2019; Lum et al., 2019; White et al., 2018; Gramagila and Phillips 2018; Miranda 2022; Murphy and Estcourt 2020; Todak et al., 2018; Smykla et al., 2016; Brookman and Jones 2020; Clavell et al., 2018; Asaro 2019). Of these, two make recommendations for the use of technology in hot spot analysis, two provide recommendations for improving the use of CCTV and visual/optical technologies, one makes recommendations for the design and implementation of autonomous robots, while seven provide recommendations for the adoption and dissemination of body worn cameras in policing practice. None of the documents reviewed provide recommendations or examples of good practice in the implementation of drones.

Example 2: Evidence-Based Recommendations for Best Practice:

Recommendations for Improving Predictive Policing to Minimise Racial Bias (Alikhademi et al., 2022)

1. Pre-processing of data

Datasets should be pre-processed to prevent discrimination by reducing the output’s dependence on variables that have been identified as discriminatory

2. Algorithm design

Use of counterfactual analysis processes to detect and correct bias

3. Post-processing of data

Use of Lohia’s method in the post-processing of the results of an algorithm to make them respect group and individual fairness.

4. Analysis of results

Use of statistical measures to evaluate the fairness of outcomes for groups.
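
To make step 4 concrete, the sketch below shows one simple group-fairness statistic of the kind such an analysis might use. It is illustrative only and not drawn from Alikhademi et al.; the data, group labels, and the choice of demographic parity as the measure are all assumptions made for the example.

```python
# Illustrative sketch only (not from Alikhademi et al.): one common
# group-fairness statistic. Group labels and data are hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive (e.g. 'flagged') outcomes within each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between groups (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit of a model's outputs across two groups
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(selection_rates(preds, groups))        # {'A': 0.666..., 'B': 0.4}
print(demographic_parity_gap(preds, groups)) # ~0.266
```

In practice an audit would combine several such measures (selection-rate parity, error-rate balance, calibration), since no single statistic captures both the group and individual fairness that the recommendations above distinguish.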

3.3.3.1: Location and ‘Hot spot’ Analysis Technologies

Two articles provide recommendations for the use of hot spot location analysis technologies (Koper et al., 2019; Hendrix et al., 2019). Koper et al. (2019) argue that greater training and emphasis on strategic uses of IT for problem-solving and crime prevention, and greater attention to the behavioural effects that these forms of technology may have on officers, might enhance their application for crime reduction. Hendrix et al. (2019) suggest that police institutions should develop a plan for how these forms of technology fit within their operational goals and guiding philosophy, to improve the correspondence between the adoption of these technologies and strategic goals.

3.3.3.2: Body Worn Cameras

Specific recommendations for improving the implementation of these technologies are made in seven of the documents focusing on the use of body worn cameras in policing (Lum et al., 2019; Gramagila and Phillips 2018; Miranda 2022; Murphy and Estcourt 2020; Todak et al., 2018; White et al., 2018; Smykla et al., 2016). Lum et al. (2019) explain that, to maximise the positive impacts of BWCs, police and researchers will need to give more attention to the ways and contexts (organisational and community) in which BWCs are most beneficial or harmful, and will need to address how BWCs can be used in police training, management, and internal investigations to achieve more fundamental organisational changes with the long-term potential to improve police accountability and legitimacy in the community. White et al. (2018) draw on the findings from research in the US policing context to demonstrate that adherence to the U.S. DOJ BWC Implementation Guide can lead to high levels of integration and acceptance among key stakeholders. From research in both the US and Australia, Murphy and Estcourt (2020) recommend that the public should be involved in the formulation of police guidelines concerning the use of these technologies, in order to democratise the rules around body-worn cameras and reduce controversy regarding their implementation. Todak et al. (2018) draw on research evidence to show how decisions to implement BWCs carry unique consequences for external stakeholders and recommend that a comprehensive planning process taking into account the views of all stakeholders be carried out prior to rollout. Finally, Smykla et al. (2016) discuss the influence of the media on the acceptance of this form of technology and recommend that the potential impacts of BWCs on safety, privacy, and police effectiveness be assessed prior to deployment.

3.3.3.3: Autonomous Security Robots

One article presents important recommendations and considerations regarding the use of autonomous robotic devices in policing. Asaro (2019) considers the serious challenges involved in automating violence and suggests that, at the very least, strict ethical codes and laws pertaining to the use of these technologies need to be developed. However, given the level of harm that these devices can pose, Asaro (2019) also recommends that they be banned in police and security practices.

3.3.3.4: CCTV and Visual/Optic Technologies

Four articles offer suggestions for improving the use of CCTV and visual/optic technologies in certain aspects of policing practice (Brookman and Jones 2020; Clavell et al., 2018; Aston et al., 2022; Laufs and Borrion 2021). Brookman and Jones (2020) recommend the introduction and refinement of clear standards and principles concerning the use of these technologies in forensic investigations, while Clavell et al. (2018) suggest that the positive and negative external factors at play at the intersection between technology, society and urban management need to be explored to help manage expectations concerning these technologies.

3.3.4: Recommendations from Research for Best Practice in the Development and Application of Ethical Frameworks and Scientific Standards in Relation to Emerging Technology

Ten of the documents reviewed present recommendations drawn from research for the development and application of ethical frameworks and scientific standards in relation to emerging technologies in policing (Almeida et al., 2021; Laufs and Borrion 2021; Aston et al., 2021; Oswald 2019; Ernst et al., 2021; Whittlestone et al., 2019; Strom 2017; Dechesne 2019; Bradford et al., 2022; Aston et al., 2022). Of these, one draws on research evidence in relation to electronic databases (Aston et al., 2021), seven in relation to biometric identification systems and AI (Almeida et al., 2021; Ernst et al., 2021; Oswald 2019; Whittlestone et al., 2019; Strom 2017; Bradford et al., 2022; Dechesne 2019), and two in relation to surveillance and monitoring technologies (Laufs and Borrion 2021; Aston et al., 2022).

3.3.4.1: Electronic Databases

One of the articles (Aston et al., 2021) draws on evidence from research with members of the public in nine European countries to make recommendations for developing and applying ethical standards in relation to the sharing of data in community policing applications. They argue that community policing models and data protection and security procedures can help enhance public confidence in information sharing, and that demonstrating enhanced data security through improvements to systems, data storage, protection and procedures will help to improve information sharing. None of the documents focus specifically on scientific standards for electronic databases.

3.3.4.2: Biometric Identification Systems and AI

Of the seven documents that draw on evidence from research for the development and application of ethical frameworks and scientific standards in relation to biometric identification technologies and artificial intelligence, six focus on the development and application of ethical standards (Almeida et al., 2021; Oswald 2019; Whittlestone et al., 2019; Strom 2017; Dechesne 2019; Bradford et al., 2022). Three of the documents discuss scientific standards (Ernst et al., 2021; Oswald 2019; Strom 2017).

Of the six that focus on the development and application of ethical frameworks, one uses evidence from research for the development of ethical standards specifically in relation to facial recognition technologies (Almeida et al., 2021), drawing on evidence from the UK, US and EU concerning the use and misuse of these technologies. From this, they recommend better checks and balances for individual and societal needs; greater accountability through improved transparency, regulation, audit, and explanation of facial recognition technology use; and the use of data protection impact assessments and human rights assessments. They also pose ten ethical questions that need to be considered for the ethical development, procurement, rollout, and use of facial recognition technologies for law enforcement purposes (see Example 3).

The other five documents use evidence from research for improving best practice via the development and application of ethical guidelines in relation to artificial intelligence (Strom 2017; Whittlestone et al., 2019; Dechesne 2019; Bradford et al., 2022; Oswald 2019). For example, Whittlestone et al. (2019) explore the applicability of various sets of published prescriptive principles and codes used to guide the development and use of these technologies. They examine the Asilomar AI Principles, developed in 2017, which outline guidelines on how research should be conducted and list the ethics and values that AI must respect; the Partnership on AI, which has established a set of criteria for guiding the development and use of AI that technology companies should uphold; the five principles from the House of Lords Select Committee on Artificial Intelligence and the cross-sector AI code; and the Global Initiative on Ethics of Autonomous and Intelligent Systems’ set of principles for guiding ethical governance of these technologies. By performing a data frequency analysis, Whittlestone et al. (2019) found substantial overlaps between the different sets of principles, revealing agreement that these technologies should be used for the common good and should not harm people’s rights or shared values such as fairness, privacy, and autonomy.

Example 3: Ten critical ethical questions that need to be considered for the ethical development, procurement, rollout, and use of Facial Recognition Technologies (Almeida et al., 2021)

1. Who should control the development, purchase, and testing of FRT systems ensuring the proper management and processes to challenge bias?

2. For what purposes and in what contexts is it acceptable to use FRT to capture individuals’ images?

3. What specific consents, notices and checks and balances should be in place for fairness and transparency for these purposes?

4. On what basis should facial data banks be built and used in relation to which purposes?

5. What specific consents, notices and checks and balances should be in place for fairness and transparency for data bank accrual and use and what should not be allowable in terms of data scraping, etc.?

6. What are the limitations of FRT performance capabilities for different purposes taking into consideration the design context?

7. What accountability should be in place for different usages?

8. How can this accountability be explicitly exercised, explained and audited for a range of stakeholder needs?

9. How are complaint and challenge processes enabled and afforded to all?

10. Can counter-AI initiatives be conducted to challenge and test law enforcement and audit systems?

Oswald (2019) draws on lessons learnt in the UK from the West Midlands data ethics model to recommend a three-pillar approach to achieving trustworthy and accountable use of AI. Oswald (2019) suggests that lessons can be learned specifically in relation to: i) the contribution to effective accountability in respect of ongoing data analytics projects; ii) the importance of the legal and scientific aspects of the interdisciplinary analysis; and iii) the role of necessity and the human rights framework in guiding the committee’s ethical discussion. Oswald argues that a three-pillar approach could contribute to achieving trustworthy and accountable use of emerging technologies in UK policing via governing law plus guidance and policy interpreted for the relevant context; ethical standards attached to personal responsibility and scientific standards; and a commitment to accountability at all levels. However, it is noted that more specific information is required in relation to the application of relevant law to the deployment of emerging technologies. Focusing on the European context, Dechesne (2019) draws upon evidence from research and lessons learnt from police practice in the Netherlands to develop a set of recommendations for the responsible use of AI and to ensure alignment with the ethical principles applicable in the Netherlands and the EU (see Example 4).

Example 4: Recommendations for the responsible use of AI to ensure alignment with ethical principles in the Netherlands and the EU (Dechesne 2019)

Recommendation 1: Create an AI review board within the organization and consider appointing an “AI Ombudsperson” to ensure independent critical evaluation of the use of AI within the organization.

Recommendation 2: Update the “Code of Ethics” in the organization to include considerations particularly important for AI scientists and/or develop clear ethics guidelines for AI scientists working in the organization.

Recommendation 3: Support and incentivize the inclusion of ethical, legal and social considerations in AI research projects.

Recommendation 4: Train AI scientists continually to raise awareness about the ethical considerations and keep them up-to-date on the recent developments in AI and insights about their ethical impact.

Recommendation 5: Develop the redress process for a wrong or grievance caused by AI systems (e.g. an official apology, compensation, etc.).

Recommendation 6: Put clear and fair processes in place for assessing accountability and responsibility for the results of an AI system.

Recommendation 7: Install evaluation procedures for the development and use of AI systems that include ethical evaluation.

Recommendation 8: Develop auditing mechanisms for AI systems to identify unintended effects such as bias.

Recommendation 9: Develop and deploy AI systems taking into consideration that errors will occur. Assess the error tolerance and acceptability in the envisioned task domain, and put in place measures to prevent, detect and mitigate errors.

Recommendation 10: Ensure that used AI systems are sufficiently transparent to enable accountability, usage in courtrooms and the enhancement of trust from the public.

Recommendation 11: Respect the privacy of individuals. Don’t gather more data than needed, store it securely, and realize that anonymization is an imperfect protection.

Recommendation 12: Ensure that users of AI retain a sense of human agency and feel empowered by the system rather than marginalized.

Three of the documents focusing on biometric identification systems and AI draw on research evidence and evidence from police practice to make recommendations for best practice concerning scientific standards for emerging technologies (Strom 2017; Oswald 2019; Ernst et al., 2021). All three focus on scientific standards for artificial intelligence technologies. Ernst et al. (2021) examine the lessons learned from experimentation with various forms of innovative technology in the Netherlands National Police and develop a series of recommendations concerning scientific standards based on the findings. In particular, they discuss how high-end technology requires specific support and facilitating services to be provided, and recommend that a strategic vision on technology and innovation be developed and that support for the technology inside and outside the organisation be secured. Strom (2017) draws on evidence from research in the US to consider the value of establishing a national technology clearinghouse, finding that there is a need for technological guidance along with strategic guidance for the acquisition of new forms of technology. From this, they argue that a clearinghouse would help forces avoid purchasing technologies with a high probability of failure.

Oswald (2019) discusses the importance of scientific validity, drawing on evidence from the West Midlands context in England to demonstrate the need to consider the statistical and scientific validity of proposed technologies and the assumptions and values built into the analysis. From this, Oswald (2019) recommends context-specific evaluation methodologies for statistical algorithms used by police forces, which should include guidance on how confidence levels and error rates should be established, communicated and evaluated. This is because, at present, the development of policing algorithms is often not underpinned by robust empirical evidence regarding their scientific validity. As a result, claims of predictive accuracy are often misjudged or misinterpreted, making it difficult to assess the actual impact of the technology in practice (ibid). Oswald (2019) also explains that the ‘Most Serious Violence’ predictive model proposed by the National Data Analytics Solution project had to be withdrawn due to concerns about statistical validity.

Oswald (2019) also discusses how other police forces in the UK have investigated predictive models, including the OxRec model developed by Oxford University, which was trialled by Thames Valley Police. The OxRec model provides an interface for the calculation of individual risk levels. However, trials have shown that the tool has low predictive accuracy at the individual level, meaning that its use cannot be justified as ethical in terms of avoiding unnecessary harm. Oswald (2019) further recommends that a national ethics approach would require clear scientific standards written with the policing context in mind.
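
As a hedged illustration of what such evaluation guidance might ask for, the sketch below reports error rates and a confidence interval for a binary risk model rather than a bare accuracy figure. It is not taken from Oswald (2019); the data and the choice of a Wilson score interval are assumptions made for the example.

```python
# Illustrative sketch only (not from Oswald 2019): reporting error rates and
# a confidence interval rather than a bare accuracy figure. Data hypothetical.
import math

def error_rates(y_true, y_pred):
    """False positive and false negative rates for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {"false_positive_rate": fp / (fp + tn),
            "false_negative_rate": fn / (fn + tp)}

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical trial results for a binary risk model
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
print(error_rates(y_true, y_pred))            # FPR 0.2, FNR ~0.33
correct = sum(t == p for t, p in zip(y_true, y_pred))
print(wilson_interval(correct, len(y_true)))  # wide interval: small sample
```

On a sample this small the interval spans roughly 0.41 to 0.93, which illustrates the point: a single headline accuracy figure from a limited trial says little about validity at the individual level.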

3.3.4.3: Surveillance Technologies and Tracking Devices

Two of the documents reviewed drew on evidence from research to make recommendations for the development and application of ethical frameworks in relation to surveillance technologies (Aston et al., 2022; Laufs and Borrion 2021). However, neither focused specifically on the issue of scientific standards. For example, Laufs and Borrion (2021) drew on evidence from interviews with policing professionals in London and highlighted the current lack of guidelines and evidence regarding social acceptability. From this, they recommend that evaluation processes be formalised and made more inclusive, to ensure that issues of ethics and social acceptability are not overshadowed by budgetary constraints and resource shortages.

Table 2 presents a summary of the recommendations for best practice for each type of technology.

Table 2 - Summary of Findings: Recommendations for Best Practice for Implementation and Dissemination of Emerging Technologies

Technology Type: Electronic Databases

Specific Technology: Data Sharing Platforms and Third-Party Data Sharing

Recommendations:
  • Improved Organisational Integration between Parties
  • Greater Standardisation of Practice
  • Assessment of Best Practice
  • Clarity over Legal Obligations on Data Storage and Processing
  • Implementation of Risk Assessments
  • Use of Ethical Frameworks (inc. specifically for Big Data)
  • Creation of a Multi-Agency Safeguarding Hub

Specific Technology: Social Media Platforms

Recommendations:
  • Greater Cooperation between Policy, Social Science and Technology Researchers in Ethical Guidelines Development

Specific Technology: Vulnerable Population Databases and Datasets

Recommendations:
  • Implementation of an Ethics of Care Approach
  • Greater Collaboration between Parties when Identifying Vulnerable Individuals

Specific Technology: Community Policing Apps

Recommendations:
  • Application of Community Policing Models, Data Protection and Security Procedures
  • Implementation of Evidence-Based Ethical Guidelines

Technology Type: Biometric Identification Systems

Specific Technology: Facial Recognition Technologies

Recommendations:
  • Transparency over Biases and Limitations in Efficiency
  • Organisational Guidelines and Codes of Practice with Clearly Outlined Responsibilities
  • Standardisation of Implementation of Ethical Principles
  • Audits on the Collection and Reporting of Ethnicity Data

Specific Technology: Artificial Intelligence

Recommendations:
  • Holistic Evaluation to Determine When and How these Technologies Should be Applied
  • Education to Enhance Critical Understanding of Inherent Biases
  • Implementation of an AI Ethics of Care Approach
  • Evaluation of Costs and Benefits for Marginalised Groups Prior to Implementation
  • Development of Shared Concepts and Terminology for an Ethics of Algorithms
  • Development of Ethical Frameworks that include Use of Data Protection Impact Assessments and Human Rights Assessments
  • Consideration of Almeida et al.’s (2021) 10 Ethical Questions
  • Application of Dechesne’s (2019) Recommendations to Ensure Alignment with Ethical Principles
  • Specific Guidance for the Acquisition of New Technologies
  • Establishment of a National Technology Clearinghouse
  • Evaluation of Statistical Algorithms

Technology Type: Surveillance Technologies and Tracking Devices

Specific Technology: Location and Hot Spot Analysis Technologies

Recommendations:
  • Attention to be Given to the Behavioural Effects of these Technologies

Specific Technology: Body Worn Cameras

Recommendations:
  • Use of Technologies to Align with Strategic Goals
  • Development of and Adherence to Implementation Guidelines
  • Comprehensive Planning Processes that Consider the Views of all Stakeholders

Specific Technology: Autonomous Security Robots

Recommendations:
  • Development of Strict Ethical Codes and Laws

Specific Technology: CCTV & Visual/Optical Technologies

Recommendations:
  • Clear Standards and Principles Regarding the Use of Technology in Forensic Investigation
  • Consideration of External Social Factors to Manage Expectations
  • Formalised Evaluation Processes

3.4: Recommendations and Lessons Learnt from Research and from the Trials, Adoption and Dissemination of Similar Types of Emerging Technologies in the Health and Children and Family Sectors

The findings from the supplementary review of the academic literature concerning the use of emergent technologies (electronic databases, biometric information systems and artificial intelligence systems, and electronic surveillance and tracking technologies) within the Health Care sector and the Children and Families sector offer a number of recommendations that might help inform best practice in the implementation and dissemination of these technologies in policing.

3.4.1: Electronic Databases

Four of the 26 articles within this sample provide valuable lessons and recommendations that may be used to help influence the adoption and dissemination of emerging electronic database technologies in policing in ways that may help to prevent potential problems from occurring.

Facca et al. (2020) examined the ethical issues associated with digital data and its use in relation to minors within the health sector. The issues that emerged concerned consent, data handling, minors’ data rights, private versus public conceptualisations of data generated through social media, and gatekeeping. Furthermore, they suggest that uncertainty concerning the ethics of research involving minors and digital technology may lead to minors being precluded from otherwise important lines of research inquiry, and that such restriction raises its own ethical challenges. From this, they recommend greater integrated (cross-sectoral) discussion to co-produce guidelines or standards concerning ethical practice between researchers and minors, as a mechanism for proceeding with such research while addressing concerns around uncertainty. Although this research concerns the health sector, it raises an issue worthy of consideration in the implementation of emerging technologies within policing: the use of electronic databases and data in relation to minors.

A study conducted by Schwarz et al. (2021) explored the effects of sharing electronic health records with people affected by mental health conditions and highlighted several ethical and practical challenges that require further exploration. They found that access to information about themselves was associated with empowerment and helped increase patient trust in health professionals. However, negative experiences resulted from inaccurate notes, disrespectful language use, or the uncovering of undiscussed diagnoses. From this, they recommended the development of guidelines and training to better prepare both service users and professionals on how to write and read notes. This is an important consideration for the implementation of emerging technologies in policing, as standards will need to be set regarding subject access to records held about them. In addition, it provides a cautionary tale about the types of language and content to be avoided when entering data, and highlights the need for guidance for those entering and accessing data in order to prevent harm. Similarly, Sleigh and Vayena (2021) discuss the ethics of collecting, sharing, and analysing personal data in biomedical settings and highlight the need for proactive public engagement to enhance transparency and build public trust, which again provides an important consideration for the use of electronic data in policing practice. Finally, Birchley et al.’s (2017) study of the ethical issues involved in developing smart-home health technologies provides some important considerations that may be transferable to the policing context. They describe how a key concern arose over the privacy of the data held, as well as its use, which manifested in emotive concerns about being monitored, and argue that providing clear information about the sharing of data with third parties can help to remedy such concerns.

3.4.2: Biometric Identification Systems and Artificial Intelligence

While none of the documents from the supplementary systematic review offered suggestions that may be helpful for the implementation of facial or voice recognition technologies, a small number offered potentially transferable suggestions from the rollout of artificial intelligence technologies (5 out of 26 documents).

For example, Aicardi et al. (2018) provide a practice-based, self-reflexive assessment of the use of AI in health research to guide policy makers and communities who engage with these technologies and issues. Similarly, Blease et al. (2019) examine the use of AI in UK general practice to assess the potential impact of future technology on key tasks in primary care and pre-empt likely social and ethical concerns, an exercise they recommend be carried out with professionals in any field seeking to adopt these technologies. Ronquillo et al. (2021) developed a consensus paper from an international invitational think-tank on nursing and artificial intelligence (AI) and identified priorities for action, opportunities, and recommendations to address existing concerns and challenges. The specific challenges they identified as priorities were that: (a) professionals need to understand the relationship between the data they collect and the AI technologies they use; (b) professionals must be meaningfully involved in all stages of AI, from development to implementation; and (c) limitations in knowledge regarding the potential for professionals to contribute to the development of AI technologies should be addressed. They argue that the nursing profession should be more involved in the conversations surrounding AI, given the significant impact it will have on nursing practice. These present important lessons to consider when planning the implementation and dissemination of emerging technologies in police practice.

3.4.3: Surveillance Technologies and Tracking Devices

Two articles from the supplementary search and review process offer lessons and recommendations from research and practice in the Health and Children and Families sectors concerning the use of these technologies that are worthy of consideration for informing the implementation of this type of technology in policing practice (Birchley et al., 2017; Zhu et al., 2021). However, these only concern the use of smart devices and sensors, with none of the documents reviewed during this part of the research process offering suggestions specifically for the other types of surveillance technologies discussed in this report. Birchley et al. (2017) reveal that public concerns around the use of these devices in health care settings revolve mostly around privacy, and suggest that greater consideration needs to be given to privacy in the implementation of these technologies. Zhu et al. (2021) also explore issues of privacy and security and argue that professionals should be involved in the design and implementation of these technologies to help promote ethical awareness and practice.

3.4.4: The Use of Research Evidence for Best Practice in the Health and Children and Families Sectors in the Development and Application of Ethical Frameworks and Scientific Standards: Considerations for Policing

Two of the documents reviewed present suggestions from research in the health and children and families sectors (and the general public sector) that may be helpful when thinking about the development and application of ethical frameworks and scientific standards for policing (Leslie 2019; Fukuda-Parr and Gibbons 2021). Both documents discuss these issues specifically in relation to artificial intelligence. For example, Fukuda-Parr and Gibbons (2021) discuss how voluntary guidelines on ethical practices issued by governments and other professional organisations attempt to create a consensus on core standards and principles for the ethical design, development, and deployment of artificial intelligence. However, these ethical frameworks can be regarded as weak in terms of standards for accountability, enforceability, and participation, and in their potential to address inequalities and discrimination (ibid). It is argued that this exposes a need for governments to develop more rigorous standards, grounded in international human rights frameworks, that are capable of holding Big Tech to account. From this, the authors recommend that AI guidelines should be honest about their potential for widening socio-economic inequality, not just discrimination, and that governance of AI design, development and deployment should be based on a robust human rights framework to protect the public interest from threats of harmful applications. Leslie (2019) provides an ethical platform for the responsible delivery of an AI project or trial in the public sector context that may be helpful to consider in relation to policing (see Example 5).

Table 3 provides a summary of the recommendations from this body of literature focusing on the use of emerging technologies in other institutions that are transferable to the policing context.

Example 5: Critical Components of an Ethically Permissible AI Project (Leslie 2019)

Considerations for AI Projects and Trials in Policing

An AI technology project can be considered ethically permissible by considering the impacts it may have on the wellbeing of affected stakeholders and communities and by demonstrating that:

The project is fair and non-discriminatory

This can be achieved by accounting for its potential to have discriminatory effects on individuals and social groups, by mitigating biases that may influence your model’s outputs, and by being aware of the issues surrounding fairness that come into play at every phase of the design and implementation pipeline.

The project is worthy of public trust

For this to be achieved, the safety, accuracy, reliability, security and robustness of its product must be guaranteed to the maximum possible extent.

The project is justifiable

This requires prioritisation of the transparency of the process by which the model is designed and implemented, and the transparency and interpretability of its decisions and behaviours.

Table 3 - Summary of Findings: Recommendations for Best Practice for Policing from the Literature Focusing on Emerging Technologies in Other Public Sector Organisations

Technology Type: Electronic Databases

Recommendations:
  • Greater Integrated Discussion between Professionals involved in Cross-Sectoral Working to Co-Produce Guidelines and Standards Concerning Ethical Practice between Researchers, Professional Institution Personnel and Minors
  • Guidelines for Storing and Sharing Data about Minors
  • Guidelines for Storing and Sharing Data concerning Mental Health
  • Use of Shared Language for Data Input Processes
  • Ethical Standards Concerning the Collection, Storing and Sharing of Biomedical Data (e.g., DNA)

Technology Type: Biometric Identification Systems

Recommendations:
  • Use of Practice-Based Self-Reflexive Assessments
  • Education and Training for Professionals

Technology Type: Surveillance Technologies and Tracking Devices

Recommendations:
  • Consideration of Privacy Issues
  • Involvement of Professionals in the Design and Implementation Process (those who will be using the technology)
  • Government Development of Rigorous Ethical Frameworks Grounded upon International Human Rights Frameworks
  • Transparency over the Potential Risk of Widening Socio-Economic and Racial Inequalities
  • Implementation of Leslie’s (2019) Ethical Considerations for AI Research and Trials

3.5: Recommendations from the Analysis of Existing Legal Frameworks Concerning Emerging Technologies

The findings from the analysis of existing legal frameworks also provide some important insights that need to be considered in the adoption and dissemination of emerging technologies in policing.

In particular, there are lessons that can be learned from an examination of the Information Commissioner’s Office (ICO) enforcement actions as well as the common law. To date, the ICO has taken a number of steps to raise concerns about technological advances and how they relate to the lawful use of personal information in different public sector contexts.

In June 2021, the ICO issued its investigative report into mobile phone data extraction (MPE) by Police Scotland. Concerns had been raised about the roll out of cyber kiosks and the collection and analysis of data in Digital Forensic Hubs.[83] The cyber kiosk project was established to allow a suitably trained operator to examine a range of digital devices in order to determine whether they contained material of evidential value. However, the cyber kiosk would not allow any of that data to be extracted or retained; such processes would have to be carried out by a Digital Forensic Hub, which has the capacity to extract data. The roll out of cyber kiosks was met with resistance by a range of stakeholders, such as civil society groups and the Justice Sub-Committee of the Scottish Parliament.[84]

Concerns primarily involved the lawful basis for processing and the transparency of the information provided to the public about that processing. While acknowledging that progress had been made over the course of the project’s development and implementation, the ICO made a number of recommendations to ensure that future projects of a similar nature would be better placed to comply with data protection law. Those recommendations are contained in the box below.

ICO Report: Mobile Phone Data Extraction, June 2021

Recommendation 1:

Police Scotland, the COPFS and the SPA should jointly assess and clarify their mutual relationships and respective roles under the Data Protection Act 2018 in relation to law enforcement processing associated with criminal investigation. They should use the findings of this assessment as the basis for the review and revision of the governance and relevant policy documentation around MPE.

Recommendation 2:

Police Scotland should ensure it has DPIAs in place that cover all of its MPE operations, in order to demonstrate it understands and appropriately addresses the information risks associated with this practice. Police Scotland should review and update such assessments prior to the procurement or roll-out of new hardware or software for MPE and processing, including any analytical capabilities. Where it identifies residual high risks associated with new processing, the force should undertake prior consultation with the ICO, as required under s65 of the DPA 2018.

Recommendation 3:

In order to provide assurance around the integrity of the data extraction processes, Police Scotland should accelerate its work to implement and maintain the standards set out in the Forensic Science Regulator’s codes of practice and conduct for forensic science providers and practitioners in the criminal justice system.

Recommendation 4:

Police Scotland should review and revise the information it provides to the public, including the range of documentation it publishes on its website and anything it provides directly to people during engagement. It should ensure that the documentation:

  • adequately covers all processing arising from MPE;
  • is consistent; and
  • provides unambiguous information on privacy and information rights.

When considering this recommendation, the force should engage with, and may wish to adapt to its circumstances, the work the National Police Chiefs’ Council Executive (NPCC) is undertaking in relation to digital processing notices as a response to recommendation 2 of the England and Wales report.

Recommendation 5: Police Scotland should review its data retention policy documentation and supplement it with materials to include:

  • alignment of regular review and deletion processes across all operational, analytical and forensic environments; and
  • processes to allow the separation and deletion of non-relevant material at the earliest opportunity, so that the force does not process it further and so officers cannot inappropriately access, review or disseminate the data.

Recommendation 6:

As far as legislative differences and devolved administration factors allow, Police Scotland should engage with work the UK Government, the NPCC and the College of Policing are undertaking. This work includes:

  • the statutory power and code of practice being introduced through the Police, Crime, Sentencing and Courts Bill;
  • police guidance on the considerations and processes involved in MPE; and
  • the privacy information officers provide to people whose devices are taken for examination.

In February 2022, the Information Commissioner’s Office issued a reprimand to the Scottish Government and NHS National Services Scotland in relation to the NHS Scotland Covid Status App (Box 2 below). The aim of the app was to enable individuals to prove that they had been vaccinated so that they could access services where restrictions applied to unvaccinated individuals. This inevitably involved processing sensitive personal information relating to an individual’s health. The app’s development, evaluation, and roll out were materially affected by the circumstances of the pandemic. The ICO had issued guidance during this period detailing the key data protection expectations for such certification systems. However, the Scottish Government and NHS National Services Scotland did not appear to have taken that guidance on board, and the ICO raised a number of concerns during the app’s development and roll out.

ICO Reprimand: COVID Status App

Concerns were raised over:

  • Third-party access to retain images provided by users (for verification purposes) to train proprietary facial recognition algorithms (withdrawn prior to launch)
  • Misleading statements on the lawful basis of processing data (Article 5(1)(a) GDPR)
  • Lack of appropriate privacy notice (Articles 12 & 13 GDPR)
  • Failure to comply with the transparency principle (Article 5(1)(a) GDPR)

Reprimand issued in respect of:

  • Processing personal data, including sensitive data, in a manner that is unfair, in breach of Article 5(1)(a) GDPR
  • Failing to provide clear information about the processing of personal data, in breach of Article 12 GDPR

In both cases we can draw together the lessons that can be learned. It is critical to:

1. Map the relationship between those involved in the development and implementation of emerging technologies. This is critical to determining roles and responsibilities in the protection of personal information. It is of particular significance when data is being shared between organisations and where data is transferred from the private sector to the public sector or vice versa.

2. Understand the nature of the data that is being processed and the scope of that processing. This will have a knock-on effect on the lawful basis of processing, the need for consent and in turn, the information that needs to be provided to the data subject. For example, if data is collected on the lawful basis that it is necessary for the performance of a specific task relating to the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties there would be an issue if that data is used to train a commercial algorithm, where consent should then be the lawful basis.

3. Undertake a comprehensive review of the above considerations before the deployment of technologies.

Beyond the UK, there is an established relationship of law enforcement and judicial cooperation with the EU.[85] Central to this relationship is the protection of personal data.[86] It is worth noting that, while the UK is no longer bound by the Charter of Fundamental Rights, in the context of cross-border cooperation the Union and its member states continue to be bound.[87] The significance of this is that there may be developments in the scope of Article 8 (Protection of Personal Data) that will impact on whether, and how, cross-border cooperation takes place and, in turn, introduce compatibility issues in the operationalisation of emerging technologies that depend on the processing of personal data. For this reason, when considering the deployment of technologies in a context with high potential for a cross-border dimension, compliance with the EU interpretation of Article 8 should be considered.

3.5.1: Electronic Databases

The existing case law offers guidance on various criteria that should be applied and factors that will be influential in determining compliance with data protection law and Article 8. In many of these cases, the key features of compliant use of databases are appropriate policies that offer clarity on the circumstances in which data will be retained and the purposes for which it is used.[88] Conversely, non-compliant databases demonstrate confusion over the roles and responsibilities of those processing data and a lack of appropriate governance to ensure compliance. For example, in R (on the application of Catt) v Association of Chief Police Officers of England, Wales and Northern Ireland, the Court made clear that “the rules in question did not need to be statutory, provided that they operated within a framework of law and that there were effective means of enforcing them”.[89]

3.5.2: Biometric Identification Systems

The trial use of automated facial recognition software was subject to detailed examination in the decision of R (on the application of Bridges) v Chief Constable of South Wales and the conclusions of the Court contain important considerations.[90] The Bridges decision provides important guidance that can be used to inform the development of policies and procedures in this area. It set out the following key considerations:

Key Guidance on the Use of Automated Facial Recognition Software in the UK

  • The more intrusive the act, the more precise and specific the law must be to justify it.
  • Data concerned is ‘sensitive personal data’ within the meaning of the DPA 2018
  • Data is processed in an automated way (and demands additional protection)
  • Policy required to limit the selection of individuals who would be included in ‘watch lists’ used by AFR
  • Policy required to limit the selection of locations for deployment.
  • Public Authorities have a positive duty to take reasonable steps to make enquiries about the potential impact of AFR (across the protected characteristics) – to satisfy their equality duty (s149 Equality Act 2010):
    • These steps should include a before trial, during trial and after trial phase
    • Assessment of impacts should include a mechanism of independent verification

See: R (on the application of Bridges) v Chief Constable of South Wales [2020] EWCA Civ 1058

However, it has become apparent that a critical feature of the development and use of biometric systems is the interaction between private entities and law enforcement. This was brought into sharp focus by the roll out of Clearview AI’s facial recognition tool.

3.5.2.1: Clearview AI: A Comparative View

Clearview AI Inc provide a facial recognition tool that has been deployed by a number of police forces across the globe to conduct retrospective identification.[91]

The tool functions by carrying out four consecutive steps:

1. “scrapes” images of faces and associated data from publicly accessible online sources (including social media), and stores that information in its database.

2. creates biometric identifiers in the form of numerical representations for each image.

3. allows users to upload an image, which is then assessed against those biometric identifiers and matched to images in its database; and

4. provides a list of results, containing all matching images and metadata. If a user clicks on any of these results, they are directed to the original source page of the image.
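
To clarify steps 2 and 3, the sketch below shows in schematic form how matching a probe image’s numerical representation (an “embedding”) against stored embeddings typically works in systems of this kind. It is a generic illustration, not Clearview’s actual implementation; the embeddings, URLs and similarity threshold are all hypothetical.

```python
# Schematic illustration only (not Clearview's code) of steps 2-3 above:
# matching a probe image's embedding against stored embeddings.
# All embeddings, URLs and the threshold are hypothetical.
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_probe(probe, database, threshold=0.8):
    """Return (source_url, similarity) pairs scoring above the threshold."""
    scored = [(url, cosine_similarity(probe, emb)) for url, emb in database]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Hypothetical database built in step 1 (source URL -> stored embedding)
database = [("https://example.org/img1", [0.10, 0.90, 0.20]),
            ("https://example.org/img2", [0.80, 0.10, 0.50])]
probe = [0.12, 0.88, 0.25]           # embedding of the uploaded image
print(match_probe(probe, database))  # [('https://example.org/img1', ~0.999)]
```

Step 4 is then simply a lookup of the stored source URL and metadata for each match, which is why the lawfulness of the original collection in step 1 is so central to the challenges discussed below.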

However, the use of this tool has been challenged in several jurisdictions. The table below sets out the issues raised in Canada, Australia, and the UK.

Comparative Regulation of Facial Recognition: Clearview AI Inc

General Concerns

Canada[92]

1. False, or misapplied matches could result in reputational damage including becoming a person of interest to law enforcement.

2. Affront to individuals’ privacy rights and broad-based harm inflicted on all members of society, who find themselves under continual mass surveillance by Clearview based on its indiscriminate scraping and processing of their facial images

3. Concerns in relation to the inclusion of images of children (and other vulnerable individuals).

Australia[93]

1. False, or misapplied matches could result in reputational damage including becoming a person of interest to law enforcement.

2. Affront to individuals’ privacy rights and broad-based harm inflicted on all members of society, who find themselves under continual mass surveillance by Clearview based on its indiscriminate scraping and processing of their facial images

3. Concerns in relation to the inclusion of images of children (and other vulnerable individuals).

UK[94]

1. Continued expansion of the database to include more UK citizens.

2. Processing is unfair because individuals are unaware that their personal data is being processed.

3. While initial statements indicated that the scope of use would be limited to law enforcement, it has become apparent that Clearview has offered its services to the Government of Ukraine. This illustrates concerns about the expansion of the service presenting escalated risks.

Grounds of specific legal challenges

Canada[92]

1. Failure to obtain the required consent[95]

2. Collection, use, and disclosure of personal information was neither appropriate nor legitimate[96]

3. Failure to report the creation of a biometric database.[97]

Australia[93]

1. Failed to obtain the required consent.[98]

2. Failed to take reasonable steps to implement practices, procedures and systems relating to the entity’s functions and activities that ensure compliance with the Australian Privacy Principles.[99]

3. Failed to collect information by lawful and fair means.[100]

4. Failed to take steps to notify individuals of the collection of information.[101]

5. Failed to take steps to ensure information it used or disclosed was accurate, up-to-date, and relevant.[102]

UK[94]

1. Failed to have a process in place to stop the data being retained indefinitely;

2. Failed to have a lawful reason for collecting the information;

3. Failed to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);

4. Failed to inform people in the UK about what is happening to their data; and

5. Asked for additional personal information, including photos, which may have acted as a disincentive to individuals wishing to object to their data being processed.[103]

The objections raised in the investigations into Clearview’s facial recognition tool did not expressly address the legality of law enforcement use of the tool. However, the critical issue was that the foundation of lawful collection was missing: Clearview scraped data in violation of various websites’ terms and conditions and did not obtain express consent for images to be used in the creation of the biometric database. This lack of a lawful basis is likely to affect the legality of sharing this data with law enforcement and the lawfulness of its use.

The Canadian investigation was keen to emphasise the potential discriminatory impact of facial recognition technologies, recognising that biometric data “is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual.” Research by the National Institute of Standards and Technology was used to support this discussion, as it demonstrated that the rate of false positives for Asian and Black individuals was often greater than that for Caucasians, by a factor of 10 to 100 times.[104]

Importantly, it was acknowledged that “facial biometric data is particularly sensitive given that it is a key to an individual’s identity, supporting the ability to identify and surveil individuals.”[105] A similar concern was raised in New Zealand, where Lynch and Chen highlight that FRT differs from other biometrics, such as DNA and fingerprints, because a person’s face is generally public and its image can be collected without their knowledge.[106]

Focusing on the risk of harm, the Office of the Privacy Commissioner of Canada (OPC) argued that Clearview’s system has the potential to result in misidentification leading to inappropriate police investigations.

Clearview has also been criticised in Australia and the UK.[107] The Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner’s Office conducted their investigation jointly, with the OAIC focusing on compliance with the Privacy Act 1988 and the ICO on compliance with the Data Protection Act 2018 and the EU General Data Protection Regulation.

They acknowledged that the right to privacy is not an absolute right, but rather that an interference can be justified where the processing is necessary, legitimate, and proportionate. However, despite the declared use being for law enforcement purposes, the reality of the tool’s use went considerably further. Specific issue was taken with the fact that Clearview was a commercial enterprise obtaining a commercial advantage through the information’s use. In particular, the personal information was used to ‘train and improve [Clearview AI Inc’s] algorithm and monetize their technology’ and the data that they held.[108] In addition, the OAIC was concerned that the covert way in which Clearview collected and used images meant that, even if it were to comply in a technical sense by producing privacy notices, this would be relatively meaningless, because affected individuals would most likely be unaware of its practices and in turn would not know to look for the privacy notice.

In the UK, the ICO had initially announced an intention to fine Clearview AI £17 million. However, the ICO has now issued an enforcement notice and a monetary penalty of £7,552,800.[109] This fine took into consideration a number of factors considered to have mitigated the scope of harm to UK citizens, in particular that Clearview had stopped offering its services in the UK. However, the ICO was critical of the fact that no steps had been taken to exclude the data of UK citizens from matches conducted elsewhere, or to delete UK citizens’ data (except in response to limited direct requests for deletion). As a result, accompanying the monetary penalty, the ICO enforcement notice also requires Clearview to take specific steps to protect the citizens of the UK. Those steps are listed in the box below.

Steps Required to be Taken by Clearview AI to comply with the ICO Enforcement Notice

1. Within six months following the date of the expiry of the appeal period, delete any personal data of data subjects resident in the UK that is held in the Clearview Database. Without prejudice to the generality of this requirement, Clearview are to delete any images of UK residents that are held in their database, and any other data associated with such images (including URLs and metadata).

2. Within three months following the date of the expiry of the appeal period, refrain from any further processing of the personal data of data subjects resident in the UK. Without prejudice to the generality of this requirement, Clearview must:

(a) Cease obtaining or “scraping” any personal data about UK residents from the public facing internet;

(b) Refrain from adding personal data about UK residents to the Clearview Database; and

(c) Refrain from processing any Probe Images of UK residents, and in particular refrain from seeking to match such images against the Clearview Database.

3. Refrain from offering any service provided by way of the Clearview Database to any customer in the UK.

4. Not do anything in future that would come within paragraphs 1-3 above without first:

(a) carrying out a DPIA compliant with Article 35 UK GDPR, and

(b) providing a copy of that DPIA to the Commissioner.

3.5.2.2: Good Practice in Facial Recognition

The introduction of Clearview AI Inc’s facial recognition tool has in many ways enabled a meaningful discussion of the potential impact of the collection and use of biometric data, as well as of the incorporation of automated decision making and artificial intelligence. Of particular concern in each jurisdiction has been the recognition that, although Clearview declared an intention to support Law Enforcement in the prevention and investigation of crime, the reality was that a commercial entity was collecting, using, and sharing information that was subsequently used for Law Enforcement purposes.

Those engaging in policing need to consider if, when, and how emerging technologies are positioned within the regulatory regimes applying to private actors and those applying to public actors (specifically Law Enforcement). This is important because, as the Council of Europe makes clear, such public/private relationships have the potential to blur roles and responsibilities leading to human rights violations.[110]

In 2020, the Council of Europe issued Guidelines on addressing the human rights impacts of algorithmic systems.[111] They propose that before investing in the development of algorithmic systems, states should ensure there will be in place “effective monitoring, assessment, review processes and redress for ensuing adverse effects or, where necessary, abandonment of processes that fail to meet human rights standards.”[112] Within those guidelines, algorithmic systems are “understood as applications that, often using mathematical optimisation techniques, perform one or more tasks such as gathering, combining, cleaning, sorting, classifying and inferring data, as well as selection, prioritisation, the making of recommendations and decision making”.[113] Accordingly, the applicability of the guidelines is wide ranging and includes, for example, facial recognition.

Significantly, the guidelines make clear that consideration should be given to the impact on human rights at every stage, from the proposal of an algorithm through to its operationalisation.[114] In 2021, the Council produced specific guidelines on the use of facial recognition.[115] The virtue of these guidelines is that they address both the public and private sector dimensions, as well as live and retroactive use. They make clear that ultimately “the necessity of the use of facial recognition technologies has to be assessed together with the proportionality to the purpose and the impact on the rights of the data subjects.”[116]

Importantly, they highlight that a legal framework should be in place that addresses each type of use and provides “a detailed explanation of the specific use and the purpose; - the minimum reliability and accuracy of the algorithm used; - the retention duration of the photos used; - the possibility of auditing these criteria; - the traceability of the process; - the safeguards”.[117] We are to some extent moving in the same direction as the Council, since the Scottish Biometrics Commissioner has a statutory duty to produce a code that will address the acquisition, retention, use and destruction of biometric data for criminal justice and police purposes.[118] That being said, there is much more that can be done to provide a supportive framework for the use of emerging technologies, including biometric identification systems.

3.5.2.3: Good Practice in Emerging Technology: New Zealand Police

The New Zealand Police have undertaken a great deal of work to reflect on their policing practice in the context of deploying emerging technologies. In 2021 they developed a New Technology Framework that seeks to support decision making in the development of the policy, procedures and processes involved in the use of new technology in policing.[119] The framework is designed around three mechanisms that together ensure a robust framework is implemented: a Trial or Adopt New Policing Technology Policy, a New Technology Working Group (of internal members), and an Expert Panel in Emergent Technology (external members).

The policy sets out the following purposes:

1. Ensure decisions to trial or adopt new and evolving policing technologies and technology-enabled capabilities are made ethically and balanced proportionately with individual and community interests

2. Ensure Police’s approach aligns with wider New Zealand Government ethical standards and expectations, including the Government Chief Data Steward’s and Privacy Commissioner’s Principles for the safe and effective use of data and analytics, and the Algorithm Charter for Aotearoa New Zealand

3. Ensure decisions reflect Police’s obligations under Te Tiriti o Waitangi, including by seeking and taking account of a te ao Māori perspective

4. Enhance public trust and confidence by ensuring decisions and the reasons for them are transparent, and decision-makers are accountable

5. Enable Police to innovate safely, so that opportunities offered by technology to deliver safer communities and better policing outcomes for New Zealanders are not lost.

The development of the policy followed two significant evaluations: one by the Law Foundation of New Zealand and the other commissioned by the New Zealand Police, both focusing on the regulation of facial recognition technologies (Lynch et al., 2020; Lynch and Chen, 2021). These reports made a number of recommendations on how to develop an effective legal and ethical framework and, while focused on one specific type of technology, they raise issues that are likely to permeate developments in emerging technologies.

For example, it was highlighted that the more sensitive the information being processed by a piece of technology, the greater the protection needed. In a similar vein, the more sensitive the information, the greater the need for specific legal structures to authorise that processing and to ensure the necessary reliability, transparency, and accountability. Indeed, the newly developed policy should go some way towards addressing the concerns raised by Lynch et al.

As can be seen from the framing of the policy’s purposes, a central feature is the recognition of the policy’s relationship to the community and the need to ensure that all members of the community are represented. In the New Zealand context this is a particularly important issue because New Zealand has a significant Māori population (Lynch & Chen, 2021; Lynch et al., 2020). Moreover, it has been widely recognised that one of the most persistent features of the use of emerging technologies is their potential to exacerbate discriminatory practice (Steege, 2021). By framing the purposes of the policy as set out above, there is recognition of the need to address this issue on an ongoing basis.

The policy introduces 10 principles that are intended to guide decision making in the trial or adoption of new technology. Those principles are:

1. Necessity

2. Effectiveness

3. Lawfulness (the proposed use is lawful)

4. Partnership

5. Fairness

6. Privacy

7. Security

8. Proportionality

9. Oversight and accountability

10. Transparency

The implementation of these principles is secured through a 5-step approval process that must be followed in the trial or adoption of either a new technology‐based policing capability, or a new use of an existing technology. This 5-step process is as follows:

1. Does the policy apply?
  • Consult the initial assessment decision tree
  • Notify the Emergent Technology Group in writing if the policy does not apply

2. Develop the proposal
  • Send the completed template to the Emergent Technology Group

3. Contact the Emergent Technology Group
  • Guidance will be given on whether the proposal is in scope and whether further detail or amendment is required

4. Consider the proposal and develop a Policy Risk Assessment (including any other required assessments, e.g. a Privacy Impact Assessment)
  • Contact other experts as required and amend the proposal as necessary

5. Secure approval
  • First, approval by the Security and Privacy Reference Group (the SPRG can refer to the Expert Panel on Emergent Technology or other key stakeholders for independent advice)
  • Then, endorsement of the decision by the Organisational Capability Governance Group

The advantage of this process is that it is clearly set out, making it transparent. There are formalised points in the process that allow for the development and refinement of a proposal. Importantly, there are various points at which the expertise of external contributors can be drawn upon to provide an independent evaluation of the technology’s potential risks and benefits in a policing context. However, a limitation of the process as narrated in the policy is that it is not clear when an independent expert is needed or how such an expert should be identified and selected.

The Expert Panel in Emergent Technology’s terms of reference establish that its role is to provide external and independent expert scrutiny of, and advice on, Police proposals that involve emergent technology.[120] The Commissioner of Police appoints the Chair of the panel and, in consultation with them, appoints the other members. While this may raise questions about the truly independent composition of the panel, some degree of transparency is secured in that the members’ appointments and expertise are accessible on the New Zealand Police website.[121] The terms of reference do require that declarations of interest are made, but it is at the Chair’s discretion how a declaration affects the individual’s participation. The visibility of membership on the New Zealand Police website is therefore an important aspect of ensuring accountability; without such mechanisms there may be the potential for commercial entities to gain influence over the development of technological innovation (or indeed to repress development). Further transparency is embedded in that the terms of reference expressly state that there is a presumption that the panel’s advice will be made publicly available. However, as yet there is limited evidence of this.

One of the greatest attributes of the New Zealand Police’s approach is the accessibility of its documentation. The framework and policy documents are easily accessible on the force’s website and as such are available to any citizen who wishes to consult them.[122] They are written in clear language that is easy to follow. However, as the policy has only recently been established, there is little evidence of how it is being implemented and to what extent it is meaningfully incorporated into policing practice.

Within the operation of this framework, there is explicit acknowledgement that where a piece of technology relies substantively on an algorithm, consideration should be given to the NZ Police Guidelines for algorithm life-cycle management. This is an important mechanism through which the NZ Police can ensure that they comply with their obligations under the Algorithm Charter.[123] However, it should be noted that the Charter is voluntary and lacks a mechanism through which compliance can be monitored (Bennett-Moses et al., 2021).

In Canada, by contrast, there is a binding framework. The Canadian Directive requires that regulated entities undertake an algorithmic impact assessment prior to adopting systems dependent on algorithms.[124] The tool used to conduct the assessment is open source, so its workings are open to scrutiny.[125] However, in the case of both the UK guidance and the Canadian provisions, the focus is on government and the public sector generally, as opposed to policing specifically. For this reason, should a compulsory algorithmic impact assessment be considered, it would need to be tailored to the policing context.
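
To illustrate the general shape of a questionnaire-based algorithmic impact assessment, the sketch below (in Python) scores yes/no answers with weights and maps the total to an impact level that determines the required safeguards. The questions, weights, and thresholds are invented for illustration and do not reproduce the Canadian tool’s published questionnaire.

    # A minimal sketch of questionnaire-style impact scoring.
    # All questions, weights and thresholds are invented; the real
    # Canadian tool uses its own published questionnaire.

    QUESTIONS = {
        "uses_personal_data": 3,
        "decision_is_fully_automated": 4,
        "affects_rights_or_liberty": 5,
        "data_scraped_from_public_web": 3,
        "human_review_before_action": -3,  # mitigating factor
    }

    def impact_level(answers):
        """Sum the weights of 'yes' answers and map to a level I-IV."""
        score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
        if score <= 3:
            return score, "Level I: basic review"
        if score <= 7:
            return score, "Level II: peer review and monitoring"
        if score <= 11:
            return score, "Level III: independent review required"
        return score, "Level IV: senior approval and full audit"

    # A hypothetical face-matching proposal.
    proposal = {
        "uses_personal_data": True,
        "decision_is_fully_automated": True,
        "affects_rights_or_liberty": True,
        "data_scraped_from_public_web": True,
        "human_review_before_action": True,
    }
    print(impact_level(proposal))  # (12, 'Level IV: ...')

A policing-specific version of such a tool would replace these generic questions with ones addressing, for example, biometric data, covert collection, and impacts on particular communities.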

This is important because algorithms have the potential to mask, exacerbate, and escalate human biases, and there are currently no minimum scientific or ethical standards that an AI tool must meet before it can be used in the criminal justice system.[126] The Justice and Home Affairs Committee has suggested that the solution is to establish a national body that can engage in the process of creating such standards.[127] Indeed, since the time of its report there has been some progress on this front, with the Alan Turing Institute leading a project seeking to draft global technical standards.[128]

3.5.3: Surveillance and Tracking Devices

Contact

Email: ryan.paterson@gov.scot
