Independent review – Independent advisory group on new and emerging technologies in policing: final report

The final report of the Independent advisory group on new and emerging technologies in policing.


4. Legal frameworks

This chapter covers legal frameworks (human rights, equalities, data protection and the law of evidence); impacts on individuals' rights; processes (including impact assessments) and procedures (including digital evidence gathering); and lessons learned, good practices and legislative gaps. It is based on both the report of the first workstream of the IAG (Daly et al., 2023) and the commissioned research report (Connon et al., 2023). As mentioned above, our analysis is based on the legislative frameworks in force at the time of writing and excludes consideration of UK Government proposals for legislative reform, in particular the Data Protection and Digital Information Bill and the Bill of Rights Bill. We consider the EU data protection standards contained in both the GDPR and the Law Enforcement Directive, as currently implemented in UK law, to be best practice, and they ought to continue to be adhered to by authorities in Scotland even if they cease to be legislative requirements.

Background

As outlined by Daly et al. (2023), with the rapid acceleration of technological advances over the past 10 years, policing bodies in Scotland have expressed a need to increase their technological capabilities in order to fulfil their statutory role in the prevention, detection and apprehension of crime. Over the past five years Police Scotland have made a significant commitment, in Policing 2026 and its implementation plan, to engage in digital policing, and have focused their approach to cybercrime in their Cyber Strategy. This has resulted in greater engagement with technology internally and significant external engagement with stakeholders and regulatory bodies. During this period Police Scotland have been challenged and criticised by parliamentary committees, national human rights institutions, statutory inspection bodies and stakeholders with regard to their implementation of human rights standards to guide policing. It is in this context that the IAG was formed, and recently we have seen the establishment of the independent Scottish Biometrics Commissioner (SBC), via the Scottish Biometrics Commissioner Act 2020, whose general function is to support and promote the adoption of lawful, effective and ethical practices regarding biometric data for criminal justice and police purposes.

Legal frameworks:

Connon et al. (2023) detail various legal considerations associated with the adoption of emerging technologies in policing. In their report they identify relevant UK case law (Appendix 3), international case law (Appendix 4) and key provisions of significant legislation, together with the technologies to which they may apply (Appendix 5). The latter also cites relevant case law addressing each legislative provision and is a useful tool against which to evaluate the legal issues and considerations presented by a specific piece of emerging technology. Connon et al.'s work on the law of evidence is covered later; first, information from Daly et al.'s (2023) report is used to outline human rights, equalities, data protection and biometrics considerations, supplemented by information from Connon et al.'s report.

Human rights:

As covered by Daly et al. (2023), since the UK is a signatory of the European Convention on Human Rights (ECHR), and as per the Scotland Act 1998, Scottish public entities bear the primary duty to promote, protect and fulfil the human rights it specifies. States have a positive obligation to protect against discrimination and promote equality. The ECHR has domestic effect in the UK via the Human Rights Act 1998, section 6 of which makes it unlawful for a public authority to act (or fail to act) in a way which is incompatible with a Convention right. The Scottish Government and policing bodies should place human rights at the core of how emerging technologies are used in policing. Key ECHR rights engaged by the use of emerging technologies are listed by Daly et al. (2023: 16) and include, for example, the right to liberty and security; respect for private and family life, home and correspondence; freedom of expression; freedom of assembly and association; and protection from discrimination in respect of these rights and freedoms.

Various pieces of domestic legislation are also relevant to implementing these rights, including the protections against discrimination and the Public Sector Equality Duty in the Equality Act 2010. The impact, and the right affected, depend on how new technologies are designed; the purpose and context in which they are used; and the safeguards and oversight systems in place. There is an emerging body of human rights jurisprudence on the development and use of digital technologies, emphasising the need for these to be centred within a human rights framework, which means considering cross-cutting human rights principles such as transparency, non-discrimination, accountability and respect for human dignity. It is also crucial that the private sector meets its due diligence obligations to ensure protection of human rights. Human rights are in place to guard against the risks of misuse and mishandling as well as to provide effective remedy. Human rights impacts explored by Connon et al. (2023) are covered below.

Equality Act 2010:

As explored by Daly et al. (2023), the Equality and Human Rights Commission (EHRC) has published guidance on the Equality Act 2010 (EA 2010), which protects individuals from discrimination, victimisation and harassment because of protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation) and supports progress on equality. The EA 2010 prohibits direct and indirect discrimination, discrimination arising from disability, failure to make reasonable adjustments for disabled people, harassment and victimisation.

The Public Sector Equality Duty (PSED) is made up of the general duty and specific duties. The general duty requires public bodies to have due regard to the need to eliminate discrimination, advance equality of opportunity and foster good relations between different groups when carrying out their activities (see section 149 of the EA 2010). Failure to ensure consideration of equality can lead to unlawful discrimination, greater inequality and worse outcomes for particular groups of people in our communities. The general duty requires equality considerations to be built into the design of policies and practices and the delivery of services, and for these to be kept under review. The EHRC has issued guidance on the PSED for Scottish public bodies.

Scottish Ministers, Police Scotland and the Scottish Police Authority have legal obligations under the PSED as service providers and employers. Of note when considering the adoption and application of new technologies is the specific duty requirement to assess the equality impact of proposed and revised policies and practices (regulation 5 of the EA 2010 (Specific Duties) (Scotland) Regulations 2012, as amended). Chapter 6 of the technical guidance on the Public Sector Equality Duty: Scotland describes what is required from public bodies in carrying out an equality impact assessment. It sets out a number of steps, including: assessing the potential impact by considering whether the equality evidence indicates a potential differential impact on each protected characteristic group, or provides an opportunity to improve equality in an area; taking account of the results in developing proposals and ensuring due regard when making decisions about the policy and its implementation; documenting decisions; publishing results; and monitoring the actual impact of the policy.

There is also a specific duty requirement to consider the use of equality award criteria and conditions in relation to public procurement, and the EHRC has published procurement guidance for Scottish public authorities in order to assist with compliance. On a practical level, Police Scotland need to make sure they have systems and processes in place to gather and use equality data on employees in order to meet the requirements of the EA 2010 (Specific Duties) (Scotland) Regulations 2012 (as amended), and to collate and use equality data on service users as a means of demonstrating compliance with section 149 of the EA 2010. The EHRC has published guidance on Artificial intelligence: meeting the Public Sector Equality Duty (PSED).[1]

Connon et al. (2023) point out that where emerging technology is to be deployed in a context involving children, steps will have to be taken to ensure compliance with the United Nations Convention on the Rights of the Child (UNCRC). They argue that the implementation of the UNCRC into domestic law has consequences, and that further research is required to explore how children's rights may be affected by the implementation of emerging technologies in the context of policing and how they can be appropriately secured.

Data protection:

As described by Daly et al. (2023), UK law at the time of writing reflects EU standards in data protection, via the UK GDPR and the Data Protection Act 2018, which implement recent reforms to EU law in this area, namely the General Data Protection Regulation and the Law Enforcement Directive. Law enforcement authorities in Scotland are therefore subject to UK data protection law, which incorporates the UK GDPR and the Data Protection Act 2018 (DPA 2018). Which data protection regime applies depends on the primary purpose of the processing and the nature of the body carrying it out.

Part 3 of the DPA 2018 applies to competent authorities processing personal data for law enforcement purposes. Law enforcement purposes are defined under section 31 of the DPA 2018. The main responsibilities for authorities processing personal data for law enforcement purposes are set out in chapter 2 of Part 3 of the DPA 2018 and, for general processing, in Article 5 of the UK GDPR. Data protection law is regulated by the UK Information Commissioner's Office (ICO).

The legislation expressly prohibits the processing of data for non-law enforcement purposes unless it is authorised by law. As Connon et al. (2023) point out, police are often engaged in activities considerably beyond the definition of law enforcement purposes, and Police Scotland would therefore need to ensure there is an appropriate authorisation in law for such processing. This is likely to be important in interactions with private sector organisations who may be involved in developing technologies or providing services, as there may be restrictions, e.g. on the use of data for the development of technology or the sharing of data with third parties.

Sensitive data/special category data:

Both data protection regimes provide additional protections for what is known as sensitive personal data or special category data. Sensitive processing is defined in section 35(8) of the DPA 2018 and special category data is defined in Article 9 of the UK GDPR. Both sensitive processing and special category data include biometric data where used for the purpose of uniquely identifying an individual.

When undertaking 'sensitive processing', in order to comply with the first principle the processing must be based either on the consent of the data subject, or policing bodies must be able to demonstrate that the processing is strictly necessary for a law enforcement purpose and based on a Schedule 8 DPA 2018 condition. The standards for valid consent (ICO website definition) under data protection law are high and difficult to meet in practice. In most cases, therefore, competent authorities (ICO website definition) processing sensitive data must be able to demonstrate that the processing is strictly necessary (i.e. it relates to a pressing need that cannot reasonably be achieved through less intrusive means) and must be able to satisfy one of the conditions (ICO website definition) in Schedule 8 of the DPA 2018. Competent authorities also need to ensure there is an appropriate policy document (ICO website definition) in place.
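
The conditions above amount to a simple decision rule. Purely as an illustration (the function and parameter names below are our own and are not drawn from the legislation or ICO guidance), the logic might be sketched in Python as follows:

    def sensitive_processing_permitted(
        has_valid_consent: bool,
        strictly_necessary: bool,        # pressing need, no less intrusive means
        schedule8_condition_met: bool,   # a Schedule 8 DPA 2018 condition applies
        policy_document_in_place: bool,  # appropriate policy document in place
    ) -> bool:
        """Illustrative check of the conditions for 'sensitive processing'
        by a competent authority under Part 3 of the DPA 2018."""
        if not policy_document_in_place:
            return False
        if has_valid_consent:
            return True
        return strictly_necessary and schedule8_condition_met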

Data protection impact assessment:

As outlined by Daly et al. (2023), a DPIA (ICO website definition) must be carried out by controllers before they process personal data where the processing is likely to result in a high risk to the rights and freedoms of individuals. Processing that is likely to result in a high risk includes, for example: systematic and extensive processing activities (including profiling) where decisions have legal or similarly significant effects on individuals; large-scale processing of special categories of data or of personal data relating to criminal convictions or offences; and the use of new technologies (e.g. surveillance systems).

It is highly likely that a DPIA will be required for proposals that involve the use of new technologies by competent authorities. Even where a DPIA is not required by law, it is good practice to carry one out for all new processing; an effective DPIA will allow controllers to identify and fix problems at an early stage.
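
A minimal sketch of the screening step implied above (hypothetical names; the triggers are the examples given in the text, not an exhaustive statutory list):

    def dpia_required(
        extensive_profiling_with_significant_effects: bool,
        large_scale_special_category_or_offence_data: bool,
        uses_new_technology: bool,
    ) -> bool:
        """Return True if any example high-risk trigger applies. Even where
        this returns False, a DPIA remains good practice for new processing."""
        return any([
            extensive_profiling_with_significant_effects,
            large_scale_special_category_or_offence_data,
            uses_new_technology,
        ])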

When undertaking a DPIA, the ICO recommends that controllers refer to its Overview of Data Protection Harms and the ICO's Taxonomy to help them identify possible harms that may arise from the plans being considered. As Connon et al. (2023) argue, and as the ICO recommends in its guidance, DPIAs and data protection policies should be kept under regular review to ensure they capture the development and use of emerging technologies; DPIAs should be carried out prior to any development and then revised as the technology progresses from trial to deployment.

Data protection by design and default:

Under Article 25 of the UK GDPR and section 57 in Part 3 of the DPA 2018, controllers have a general obligation to implement appropriate technical and organisational measures to show that they have considered and integrated - i.e. 'baked in' - the principles of data protection into processing activities, from the design stage right through the lifecycle. ICO guidance sets out that data protection by design starts at the initial phase of any system, service, product or process. This means that policing bodies must, prior to implementing any intended processing activities, consider the risks that these may pose to individuals and the possible measures available to ensure compliance with the data protection principles and to protect individual rights. The ICO has published guidance on data protection by design and default within its Guide to the UK GDPR.

Automated decision making:

As Daly et al. (2023) describe, under sections 49 and 50 in Part 3 of the DPA 2018 and Article 22 of the UK GDPR, individuals have the right not to be subject to a decision based solely on automated processing which has a legal or similarly significant effect on them, unless the conditions set out in the legislation are met. Where the processing is for law enforcement purposes, to comply with section 49 of the DPA 2018 the decision must be required or authorised by law, and the safeguards set out in section 50 of the DPA 2018 must be met.

Under the UK GDPR, where Article 22 is engaged the processing must be necessary to fulfil a contract, authorised by law and subject to the safeguards set out in domestic law, or based on the individual's explicit consent. In both regimes individuals retain the right to obtain human intervention, express their point of view, obtain an explanation of the decision and challenge it (the obligation is to inform the data subject in writing that they have been subject to a decision based solely on automated processing). As Connon et al. (2023) argue, and as the ICO notes in its guidance on AI and data protection (ICO website definition), artificial intelligence processes pose challenges to the accountability and transparency of operations. It will therefore be necessary to ensure that appropriate procedures are in place so that consideration is given to whether there is ambiguity in the quality, reliability or transparency of how data is being processed by automated means; that controllers are able to identify when an automated decision is taking place; that data protection legislation can be complied with; and that individuals are able to access their rights under sections 49 and 50 of the DPA 2018 and Article 22 of the UK GDPR.
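
The interaction of these conditions and safeguards can be sketched as follows (a simplified illustration under our own naming, not statutory language):

    from dataclasses import dataclass

    @dataclass
    class AutomatedDecision:
        solely_automated: bool               # no meaningful human involvement
        significant_effect: bool             # legal or similarly significant effect
        required_or_authorised_by_law: bool
        subject_notified_in_writing: bool

    def part3_adm_compliant(decision: AutomatedDecision,
                            human_review_available: bool) -> bool:
        """Illustrative check against ss. 49-50 DPA 2018: a qualifying decision
        must be required or authorised by law, the data subject must be told in
        writing, and human intervention/reconsideration must be available."""
        if not (decision.solely_automated and decision.significant_effect):
            return True  # ss. 49-50 not engaged; other duties still apply
        return (decision.required_or_authorised_by_law
                and decision.subject_notified_in_writing
                and human_review_available)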

Artificial intelligence guidance:

The ICO has developed best practice guidance on AI and data protection (ICO website) for controllers to take into account when processing personal information using AI. It sets out the ICO's interpretation of data protection law as it applies to AI systems that process personal data, contains advice on how to interpret the relevant law, and provides recommendations on organisational and technical measures to mitigate risks. Where a controller is planning to use AI and undertake data analytics, the ICO recommends using its Toolkit for organisations considering using data analytics at the outset to identify risks to rights and freedoms.

Biometrics:

The Scottish Biometrics Commissioner Act 2020 defines biometric data for criminal justice and policing purposes in Scotland, and establishes an independent public body to promote and support the lawful, effective and ethical acquisition, retention, use and destruction of biometric data.

It should be noted that the definition of biometric data in the SBC Act differs from the definition in UK data protection law and includes, for example, photographs.[2] The use of biometrics is supplemented by a number of other legal frameworks (Daly et al., 2023: 25). The Scottish Biometrics Code of Practice came into force on 16 November 2022. The Code of Practice provides a high-level summary of 12 general guiding principles and ethical considerations: lawful authority and legal basis; necessity; proportionality; enhancing public safety and public good; ethical behaviour; respect for the human rights of individuals and groups; justice and accountability; encouraging scientific and technological advancement; protection of children, young people and vulnerable adults; promoting privacy-enhancing technology; promoting equality; and retention periods authorised by law.

The law of evidence:

Improperly obtained evidence:

As detailed by Connon et al. (2023), evidence may be obtained in a number of ways, including the search of persons, premises and personal property, the taking of samples and the use of surveillance technologies. For evidence to be considered 'legally obtained' it must comply with the rules of evidence; if it does not, it is considered to have been obtained improperly. If obtained improperly its admissibility can be questioned, and the common law rule on the admissibility of improperly obtained evidence involves a balancing exercise between the interest of the citizen in being protected from illegal invasions of their liberties and the interest of the state in securing evidence bearing on the commission of crime and necessary to enable justice to be done.

It has been recognised that such evidence should not be withheld from court on any merely formal or technical ground.[3] Where information is improperly obtained, in addition to admissibility, the most likely grounds of challenge are Article 5 (Right to Liberty and Security), Article 6 (Right to a Fair Trial) and Article 8 (Right to Private and Family Life) of the European Convention on Human Rights.[4]

Beyond common law principles, there are statutory forms of regulation that bear on whether or not intelligence or evidence has been legally obtained: the Criminal Procedure (Scotland) Act 1995, the Regulation of Investigatory Powers (Scotland) Act 2000, the Police Act 1997 and the Investigatory Powers Act 2016, as well as compliance with the National Assessment Framework for Biometric Data Outcomes and the Scottish Biometrics Commissioner's Code of Practice.[5] The use of emerging technologies is likely to challenge the boundaries of these legislative measures.[6] Examples are provided in Connon et al. (2023), but it is noted that the SBC Code of Practice is likely to address these ambiguities, at least as regards biometric data.

Following some controversy, the Police, Crime, Sentencing and Courts Act 2022 introduced a system of regulation specifically focused on the authorisation of the extraction of information from electronic devices.[7] As Connon et al. (2023: 62) outline in more detail, these provisions should offer clarity on the process to be followed, the limitations on the extraction of information, and the purposes and restrictions on the scope of those purposes, and must adhere to a forthcoming code of practice.

Disclosure of evidence:

Part 6 of the Criminal Justice and Licensing (Scotland) Act 2010 sets out the rules of disclosure, which mean that an investigating agency must provide all information relevant to a case for or against an accused that was obtained in the course of the investigation. A failure to disclose information to the defence at an early stage could result in a case being challenged on the basis of the prejudicial effect of the information not being made available. This is likely to present a problem as automated decision-making systems, AI and algorithms become more embedded in policing practice, as the transparency of such systems is problematic.[8]

At an international level, Connon et al. (2023) outline measures being developed that seek to facilitate the disclosure of electronic evidence, e.g. a Protocol to the Cybercrime Convention,[9] which seeks to enhance cooperation between states to ensure offences recognised by the Convention can be effectively investigated and prosecuted. Although the UK is not yet a signatory, the framework facilitates a lawful basis for the exchange of evidence, which is likely to impact transparency, accountability and trust in the police.

Impacts

Impact on rights and freedoms:

As outlined in Daly et al. (2023), human rights impacts will depend on the type of technology used and the use case, and must be examined in context, taking account of impacts at multiple levels (individual, community or society-wide). In some cases it is impossible to anticipate the full impact of police use of technology on human rights, and harm may be difficult to quantify, particularly as it may continue into the future (e.g. if personal data is shared or sold). It is important to ensure that any private actors involved comply with applicable laws and respect human rights. Depending on whether they are controllers or processors they will have different obligations under data protection law, and different governance is required to ensure compliance. One concern is the amplification of discrimination by digital technologies (e.g. AI and machine learning systems) that depend on historic data, which may be incomplete or contain bias. However, with regulation and governance of the design and development of new technologies (particularly AI), they may be used to advance, rather than put at risk, equality and human rights.

Impacts may occur at a societal level: for example, risks to democratic freedoms (impacting Articles 9-11 of the ECHR) can arise from the widespread use of surveillance tools and AI-enabled technologies by police. There is, for instance, increased use of digital surveillance tools in the context of peaceful assembly and freedom of expression under the auspices of national security or public order. This type of interference with our democratic freedoms should only be permitted if it is lawful, proportionate and necessary, on a targeted basis where reasonable suspicion can be demonstrated, and any deployment must comply with data protection law. The proportionality principle requires that any surveillance measure used should be the least invasive option. UK surveillance laws applicable to certain bulk surveillance practices must respect these principles.

Facial recognition technologies employed at large events therefore raise human rights (proportionality) concerns as well as being potentially discriminatory. These issues have been considered in the UK context in the Bridges/South Wales Police case and by the ICO in its 2019 and 2021 opinions on the use of live facial recognition technology by law enforcement, and more generally, in public places (ICO website). Risks also include discrimination, particularly against people of African descent and other minorities, women and persons with disabilities, given the literature on algorithmic error rates in facial recognition technologies. Although live facial recognition is not currently used by Police Scotland, non-live versions may still exhibit discriminatory biases. Important steps are being taken to clarify the legal frameworks governing biometric data through the Scottish Biometrics Commissioner's Code of Practice covering the acquisition, retention, use and destruction of biometric data for criminal justice and police purposes. However, given the issues identified and growing unease with live facial recognition internationally, including outright bans on police use, we would need to see strong justifications for its use in order to establish 'benefits', and an array of concerns would need to be adequately addressed.

Databases: As Connon et al. (2023) point out, in addition to the many concerns about negative impacts on human rights, it is possible that technologies may support human rights protections (e.g. access to real-time translation would support the right to a fair trial by providing information promptly on the nature of the accusation). Connon et al. (2023) provide some examples of legal challenges to the deployment of new technologies, often framed in terms of the Article 8 right to privacy.[10] For example, poor governance of the use of databases has been challenged on the basis that it breaches data protection law and that the inclusion of personal data on such databases infringes Article 8 ECHR. These challenges relate to whether data should be retained, for how long, and at what point it should be deleted; for detail see Connon et al. (2023: 71).

Connon et al. (2023) note that biometric identification systems are likely to be operationalised through the use of a database of some kind. They point out that although failure to comply with the SBC Code of Practice will not in itself give rise to grounds for legal action, compliance with the Code must be taken into account in deciding whether evidence has been improperly obtained. Furthermore, data protection law must be complied with, and non-compliance will face regulatory action from the ICO. They also note that the collection and use of biometric data in the form of facial recognition has faced significant judicial attention via the Bridges/South Wales Police case in the English and Welsh courts (for details see Connon et al. 2023, Appendix 3 and section 3.5.2). There were criticisms of the governance framework used by the police force: the data protection impact assessment had failed to grasp the risks to the human rights and freedoms of data subjects, and steps had not been taken to evaluate potential discriminatory impacts before, during and after the trial. The Bridges decision recognised the value of the Code of Practice issued by the Secretary of State and the guidance produced by the Surveillance Camera Commissioner in 2019 on police use of AFR technology with surveillance camera technology, but was critical of the generic nature of each and was concerned about the lack of specific policies regarding the inclusion of individuals on watch lists or the justification for selecting particular locations for the use of AFR. It is worth noting that the Code of Practice of the Biometrics and Surveillance Camera Commissioner relates to England and Wales; working across borders will require compliance with that framework as well as those in Scotland.

The use of emerging technology for the purposes of electronic surveillance and monitoring is subject to the regulatory frameworks set out in the investigatory powers legislation (the Regulation of Investigatory Powers Act 2000 and the Regulation of Investigatory Powers (Scotland) Act 2000) and data protection law (Connon et al. 2023: 75); failure to comply risks breaching human rights and may affect the admissibility of evidence. The legislation distinguishes between 'directed surveillance' (covert surveillance that is not intrusive and relates to obtaining private information about a person who may or may not be the focus of an investigation) and 'intrusive surveillance' (covert surveillance that focuses on residential premises or a private vehicle, whether carried out by an individual or by means of a surveillance device). Significantly, remote surveillance may be considered intrusive if the technology can achieve a sufficiently reliable quality of data. Directed surveillance can be authorised on the grounds that it is necessary to prevent or detect crime or prevent disorder, or in the interests of public safety or the protection of public health, whilst intrusive surveillance should only be authorised where it is considered necessary to prevent or detect serious crime. Importantly, if a new technology would obtain the same data as an already available means, the question arises whether it is needed at all, because in assessing whether intrusive surveillance is necessary and proportionate, consideration must be given to whether the same information could be obtained by other means.
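
To illustrate the distinction just described, the following sketch (simplified, with our own labels standing in for the statutory tests) encodes the classification and the differing authorisation grounds:

    DIRECTED_GROUNDS = {
        "prevent or detect crime", "prevent disorder",
        "public safety", "protect public health",
    }
    INTRUSIVE_GROUNDS = {"prevent or detect serious crime"}

    def classify_surveillance(covert: bool,
                              targets_residence_or_private_vehicle: bool,
                              remote_tech_equivalent_quality: bool):
        """Simplified RIP(S)A 2000 classification: covert surveillance of
        residential premises or a private vehicle, or remote technology
        yielding data of equivalent quality, is treated as intrusive."""
        if not covert:
            return "overt - outside these provisions", set()
        if targets_residence_or_private_vehicle or remote_tech_equivalent_quality:
            return "intrusive", INTRUSIVE_GROUNDS  # narrow grounds only
        return "directed", DIRECTED_GROUNDS        # wider grounds; necessity and proportionality still apply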

In relation to the regulation of automated decision making, Connon et al. (2023) point out that where it is necessary and proportionate for the prevention, investigation and prosecution of criminal offences, there is an exception to the provision that an individual should not be the subject of a solely automated decision-making process (see sections 49 and 50 of the DPA 2018 and Article 22 of the UK GDPR). The 2017 Council of Europe study on the human rights implications of automated data processing techniques[11] highlighted that, in addition to design flaws in algorithms, datasets may contain bias that is replicated or magnified by the algorithm (Završnik, 2021), and there are further challenges with human interpretation of algorithmic outputs. There are clear concerns around automated decision making and AI (Binns, 2022), including the limited ability to achieve transparency in how data is processed, given the opacity of algorithms.

Connon et al. (2023) highlight that the UK Office for AI has issued guidelines for the procurement of AI in government (2020) and an ethics, transparency and accountability framework (2021) which includes an algorithmic transparency template, but these are not legally binding (in stark contrast to the Canadian system). The Justice and Home Affairs Committee (JHAC) of the House of Lords has raised concerns that there is no central register of AI technologies in the UK, which is problematic for transparency and accountability, and has argued there should be a 'duty of candour' on police to ensure transparency in the use of AI-enabled technologies. Algorithms have the potential to exacerbate and escalate biases, and the JHAC was concerned that there were no scientific or ethical standards an AI tool should meet before it can be used in the criminal justice sphere. It is noted that the Alan Turing Institute is now leading a project seeking to draft global technical standards.[12] International developments to enhance ethical approaches to the use of algorithms include the OECD Principles on Artificial Intelligence,[13] the G20 AI Principles and UNESCO's Recommendation on the Ethics of AI.[14] Finally, the Council of Europe is in the process of developing a convention on the use of AI that is due to be completed in 2023.[15][16]

Data protection and equality impact assessments:

Police Scotland report that since 2018 they have worked to make the use of DPIAs systematic for all new or updated processing, including the introduction of new technologies. They emphasise that their impact assessment procedures are aligned, e.g. DPIAs and Equality and Human Rights Impact Assessments (EqHRIA) are in concert with each other. The two frameworks are used to guide the design, build and implementation of technologies and processing by working with a range of specialists to discuss risks and identify solutions. Examples have been provided of where changes have been made in order to ensure compliance; for more details see Daly et al. (2023: 29).

It has been suggested that Police Scotland may wish to consider undertaking Children's Rights and Wellbeing Impact Assessments (CRWIAs) alongside DPIAs and EqHRIAs as a way of further embedding a human rights based approach. In relation to equality impacts, it is noted that technologies based on predictive analytics, which leverage data and other technologies to monitor and assess individuals, communities and/or specific locations, can target particular protected characteristic groups over others, e.g. racial groups, younger people, disabled people, religious groups and women. Both the EHRC and the SHRC raised concerns in 2020 about potential discrimination caused by predictive policing. EqHRIA processes must help identify potential discrimination, consider possible impacts on people with protected characteristics (and any inequalities, barriers or specific needs) and identify opportunities to advance equality when designing, commissioning or using new technologies. This is reinforced by data protection law, according to which controllers must identify and assess all potential risks to rights and freedoms, including the risk of discrimination, and identify and implement measures to mitigate and manage these risks (section 65 DPA 2018 and Article 35 UK GDPR).

Processes for establishing legal basis:

As detailed in Daly et al. (2023), Police Scotland have explained that they primarily use the established frameworks of the DPIA and EqHRIA to establish, define and document the legal basis they rely on for the use of new technology. Lawfulness is at the heart of the DPIA, addressing the legislative requirement that processing must be both lawful and fair. Police Scotland assert that the DPIA and EqHRIA frameworks allow them to design the necessary legislative and regulatory compliance into the new technology. An initial indicative legal basis is tested and developed with input from a number of internal and external actors, and they distinguish between the purpose of processing and the manner of processing (with the latter less clear at the outset). Although the Police and Fire Reform (Scotland) Act 2012 forms a backbone to many police activities, the legislative provisions that may be relied upon as legal bases are many and lengthy. Learning from cyber kiosks (also known as digital triage devices) has been drawn on, and in recent examples, such as the use of body worn video (BWV) by armed officers, Police Scotland developed a Code of Practice (Police Scotland website link) and the EqHRIA and DPIAs are treated as live documents - see Daly et al. (2023: 33) for more information.

Wider views beyond Police Scotland hold that there is a lack of clarity, or even insufficient legislation, in Scotland to facilitate and justify police use of emerging technologies for certain purposes. For example, there was significant controversy and disagreement among stakeholders about whether there was an appropriate basis for Police Scotland to use cyber kiosks. Whilst Police Scotland claimed the legal basis exists, others such as the SHRC were of the view that the legal basis was insufficiently clear. Although Police Scotland do specify the legal basis in DPIAs, given the potential for differing interpretations, the legal basis (and the opinions being drawn on) should be shared with key stakeholders as a matter of course so that they may be questioned and tested, and this must be reviewed in light of further developments (such as a change in use case or additional information coming to light).

It is noted that further controversies may arise given the lack of a clear and explicit legal framework and policy guidance for other technologies such as facial recognition, drones, body worn cameras, data-driven analysis, AI systems, and the use of personal data collected and processed by these technologies. The ICO has produced guidance on a number of these issues (live facial recognition use by law enforcement (ICO website) and more generally in public places (ICO website), the use of video surveillance (ICO website), and AI and data protection (ICO website)), and the Ryder Review (the independent review of the governance of biometric data in England and Wales) recommended that a legally binding code of practice for live facial recognition should be formulated.

Data sharing by police and other agencies also gives rise to concern and may negatively impact on human rights. Although there is insufficient knowledge of the extent of data sharing in Scotland, there have been reports of disabled people being photographed by English police forces at an Extinction Rebellion protest and their details being passed to the Department for Work and Pensions, even though human rights standards prohibit the collection of personal data to intimidate participants in a protest. As with any processing of personal data, data can only be shared lawfully by the police if there is a clear basis in law and the sharing would not result in an infringement of any other law (including human rights law).

Given the police's role in investigating allegations of criminal behaviour, there are a number of activities with technological implications, such as carrying out searches, undertaking surveillance (e.g. collecting facial images), interrogating suspects and witnesses, and generally securing evidence (e.g. collecting DNA and fingerprints) – triggering the application of Articles 5, 6 and 8 of the ECHR, and potentially resulting in unlawful discrimination under the EA 2010. National and international courts have found violations of human rights and data protection law in the blanket retention of biometric data (DNA profiles, cellular samples, fingerprints and custody photographs) and in bulk surveillance of the public. The SBC Code of Practice should be referred to.

The recent investigation by the House of Lords Justice and Home Affairs Committee into how advanced technologies are used in the justice system in England and Wales highlighted the proliferation of AI tools among police forces without proper oversight. It acknowledged the opportunity for AI to help prevent crime but stressed the risk of exacerbating discrimination, and highlighted that 'without sufficient safeguards, supervision and caution, advanced technologies may have a chilling effect on a range of human rights, undermine the fairness of trials, weaken the rule of law, further exacerbate existing inequalities and fail to produce the promised effectiveness and efficiency gains.'

The 2022 UK Government policy paper, Establishing a pro-innovation approach to regulating AI, considers that any regulatory activity should be directed towards AI presenting 'real, identifiable, unacceptable levels of risk', but for now does not consider legislation to be necessary; instead it plans to introduce a set of non-statutory cross-sectoral principles on AI. The Scottish Government launched its own AI Strategy in 2021, setting out its vision to become 'a leader in the development and use of trustworthy, ethical and inclusive AI', but there is no mention of police use of AI in the paper. The Scottish Government has also set out its vision to be an 'Ethical Digital Nation' and to behave in ways which generate public trust in the use of data and technology, but again policing is not mentioned in this strategy.

Responses to the IAG public consultation (call for evidence) emphasised that the legislative framework in which a technology operates must be well-defined and have exact parameters before the technology is introduced. A strong legal assessment framework would likely prevent legal, jurisdictional and operational challenges from arising and reduce risks to public trust, feelings of oppression and surveillance, and discrimination. The need for critical assessment or external consultation was also raised. It was suggested that an ethical and legal assessment framework should embrace an equality and human rights-based approach to understand impacts on individuals (including witnesses, victims, suspects, members of the public and protected characteristic groups), provide strong and unbiased evidence that the proposed technology is non-discriminatory and will not entrench existing inequalities, and explain why it is necessary and proportionate - largely reflecting and reinforcing existing requirements under legislation, including data protection law.

Procedures and digital evidence gathering:

In considering policing procedures and digital evidence gathering, some commentators consider that there are gaps in the case law of the Scottish and English courts in dealing with the expanded scale and scope of interference with Article 8 of the ECHR (respect for private and family life, home and correspondence). Smartphones which may be examined by police are incomparable to paper documents or more basic computers, given that they store, transmit and communicate large amounts of data, some of which is jointly owned by or belongs to others and can be obtained without their consent. The information found on a mobile device may provide profound insights into an individual's behaviour, beliefs and emotional state. Evidence extracted from a digital device may be critical to criminal investigations, but a device should only be reviewed, and information extracted, where this represents a reasonable line of enquiry. There is also an issue with the use of technology to extract information, including by bypassing security protocols, and a robust approach to regulation and scrutiny is required.

In 2021 the ICO published the findings of an investigation into the use of mobile phone extraction in Scotland. Six recommendations for Police Scotland and one recommendation for COPFS and the SPA were made. These included recommendations on establishing controllership, DPIAs, transparency, retention, adherence to forensic standards, and working with UK partners to implement the new powers under the Police, Crime, Sentencing and Courts Act 2022 (see sections 37-44) and the recently published Code of Practice which applies in Scotland.

The Bridges case on facial recognition - currently an example of English case law and not binding in Scotland - emphasised that clear guidance on the use of the technology, and on who could be targeted, were issues of legality, and that in the absence of such guidance a finding that the interference was in accordance with the law was not sound. Scotland is set to become a forerunner in the regulation of biometric data use by police: the SBC draft Code of Practice will be the first of its kind, making Scotland the first UK country to have detailed legislation and a statutory Code of Practice on the acquisition, retention, use and destruction of biometric data for criminal justice and police purposes. This is a positive step, and its implementation and evaluation should inform how procedures and evidence gathering (even beyond biometric data) can be improved to reflect best practice in human rights, equalities and data protection.

It should also be noted that a fully digitised justice system is a key objective outlined in the Digital Strategy for Justice in Scotland, and this includes a focus on providing digital recording of evidence and reports, and a secure digital platform to hold all information relevant to a case.

Drones and evidence case study:

Drones (remotely piloted aircraft systems, RPAS) are capable of viewing people from vantage points in which there might otherwise be an expectation of privacy, at distances where there may be limited appreciation that drones are in operation, and potentially with infrared or low-light capability or using ANPR or facial recognition technologies. Legal issues will vary depending on the deployment context. There is a need for robust impact assessments, and the use of drones is subject to a number of legal requirements, including compliance with human rights, equalities and data protection requirements and Civil Aviation Authority regulations. As drones will likely capture sensitive personal data, there is a requirement to demonstrate that no less intrusive means are suitable. For drones the risk of 'collateral intrusion' is likely to be more extensive than for other means, so demonstrating this necessity is important in an impact assessment. Deployment at public protests would require detailed justification, as political belief constitutes sensitive personal data.

There is detailed guidance from the ICO on the use of drones under the UK GDPR (although this does not cover Part 3 law enforcement processing), and the measures required may include the prohibition of continuous recording, restriction of recording at lower altitudes, a restricted field of vision, or other means. A particular challenge is the requirement to provide notification of drone operation in an area. Privacy by design (e.g. encrypting any locally stored data) is required. The use of drones has not seen significant challenge in the courts in Scotland, but the Bridges case is of relevance. Internationally, the use of drones is often considered under prior legal frameworks, for example those governing police helicopter surveillance, but a number of states in the US have prohibited the use of drones for police or other surveillance on constitutional grounds.
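
As one concrete example of the privacy-by-design measure mentioned above (encrypting locally stored recordings), a minimal sketch using the Python cryptography library might look like this. The file names are placeholders, and in practice the key would come from a secure key-management service rather than being generated and held beside the footage:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # placeholder: use a managed, hardware-backed key
    cipher = Fernet(key)

    # Encrypt captured footage so only ciphertext persists on the device.
    with open("drone_capture.raw", "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    with open("drone_capture.enc", "wb") as f:
        f.write(ciphertext)

    # Later, authorised retrieval decrypts with the managed key.
    with open("drone_capture.enc", "rb") as f:
        footage = cipher.decrypt(f.read())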

Lessons learned, good practices & legislative gaps:

Daly et al. (2023: 48-60) cover four case studies highlighting lessons learned from a Scottish perspective, relating to cyber kiosks, mobile working, BWV and drones. Although much of this is covered in the next chapter, some of the implications from a legal standpoint are briefly outlined here. As previously alluded to, the legal basis of Police Scotland's use of digital triage devices had been called into question. The Open Rights Group, Privacy International and the Scottish Human Rights Commission were among the stakeholders who believed there was a lack of a clear legal basis for their use. The Justice Sub-Committee on Policing report (2021)[17] emphasised the need to undertake the necessary assessments, confirm the legal basis and consult relevant stakeholders prior to making a decision.

When it comes to legal considerations regarding emerging technologies such as drones, it is clear that the policing context in which drones are deployed (e.g. a missing persons search versus surveillance of a large public event) affects the extent to which legal issues such as the right to privacy are engaged (see previous section). Daly et al. (2023) cite the deployment of BWV devices to armed police officers in Scotland prior to the COP26 conference in Glasgow in 2021 as an example of improvements made to mitigate potential privacy and third-party concerns. Police Scotland completed full EqHRIAs and DPIAs (these are treated as live documents to be reviewed and updated annually to reflect changes in legislation, policy and technology) and developed and published a detailed Code of Practice outlining how BWV is to be used in armed policing.

Daly et al. (2023: 60-68) also draw out some insights from other jurisdictions. For example, equivalence has been drawn between Article 8 ECHR and section 8 of the Canadian Charter of Rights and Freedoms. The Canadian Supreme Court has a growing body of jurisprudence distinguishing between traditional searches and searches of devices and cyberspace. The case of Fearon determined that the search of a mobile phone was not an inevitable breach of privacy. This highlights that privacy safeguards or modifications (e.g. tailoring the nature and extent of the search to the purpose for which it is lawfully conducted) can be used to preserve the human rights of the target of the search. As Daly et al. (2023) point out, the officer deciding to search must be capable of comprehensively appreciating the human rights engaged at the time, and officers would therefore require precise and detailed legal training in order to balance the necessity of performing a search against the rights of the individual in a proportionate manner.

There is some controversy about the clarity and ambiguity (or lack thereof) of the legal opinion obtained by Police Scotland on the legal basis of cyber kiosks, which cites the Canadian cases. Daly et al. (2023) found it concerning that Police Scotland asserted the opinion was clear and unambiguous: this was a narrow interpretation of it and obscured the context of the advice. Indeed, the author of the opinion made several recommendations and identified legislation and a code of practice as best practice. Whilst Canadian jurisprudence has acknowledged, for example, that the profiling of suspects on the basis of their ethnicity is unlawful, a comprehensive framework covering all aspects of emerging technologies does not yet exist.

Daly et al. (2023) highlight that the Privacy Commissioner of New Zealand considers their legal framework adequate to address the field of biometric deployments (including facial recognition for identification purposes), though they are considering whether their Privacy Act 2020 should be supplemented by a code of practice. Crucially, given the interdependent nature of technological advances, it is worth noting that the Privacy Act applies to both public and private bodies, like data protection law in the UK. Also of interest is that NZ immigration legislation limits the use of AI in decision making, requiring that personal responsibility be ascribed for decisions. Many NZ government agencies have voluntarily subscribed to an Algorithm Charter that provides a framework for artificial intelligence-related products and services.

Daly et al. (2023) also highlight international law and norms, including, at an advisory level, respect for fundamental rights derived from the ratification of UN treaties and the work of their committees. Key principles can be distilled from international law, e.g. Article 17 of the ICCPR. The Council of Europe and its delegate bodies also have detailed legal frameworks, some of which are binding in law. Finally, international non-governmental organisations such as Amnesty International have published guidance relevant to policing. Other international standards that should be given due consideration include the jurisprudence and general comments of human rights bodies of which the UK is a member (e.g. UN guiding principles on the use of personal and non-personal information, and on business and human rights). As Daly et al. (2023) argue, there is a legitimate expectation that private actors (both controllers and processors, e.g. those developing technologies which may be used by police) should comply with all applicable laws and respect human rights and data protection law. As Daly et al. (2023) point out, the European Parliament supported the European Commission's call for a five-year ban on police use of facial recognition and predictive policing algorithms, as part of international concern over levels of surveillance by states and private actors which the UN considers to be incompatible with fundamental rights (e.g. where individuals have gathered to protest, the use of facial recognition can serve to intimidate and deter people from protesting).

Connon et al.'s (2023) analysis of the existing legal frameworks outlines a number of insights and recommendations that need to be considered for the adoption of emerging technologies in policing. As they point out, there are a number of lessons to be learned through the examination of the Information Commissioner's enforcement action, as well as the common law. The ICO's (2021) investigative report into mobile phone data extraction by police in Scotland is of particular note. Concerns around cyber kiosks included the lawful basis for processing and the transparency of information provided to the public; although the ICO acknowledged there had been progress, it made a number of recommendations (e.g. ensuring DPIAs are in place and are reviewed and updated, and consulting with the ICO on any proposed high-risk processing of data) - see ICO 2021 (cited in Connon et al. 2023: 110). Connon et al. (2023: 111) also mention the ICO's reprimand to the Scottish Government and NHS National Services Scotland in relation to the NHS Scotland Covid Status App.

In relation to both cases Connon et al. (2023) draw out a number of lessons learned including the critical importance of:

1) mapping the relationships between those involved in the development and implementation of emerging technologies (particularly significant when data is being shared between organisations, including from the private to the public sector) in order to determine roles and responsibilities in the protection of personal information (a minimal illustration follows the list below).

2) understanding the nature of the data being processed and the scope of the processing, in order to be clear on the lawful basis of processing, the need for consent, and the information that needs to be provided to the data subject (e.g. there would be an issue if data collected on the lawful basis of being necessary for the prevention, investigation or detection of a criminal offence were used to train a commercial algorithm, as consent should then be the lawful basis).

3) undertaking a comprehensive review of the above considerations before deployment of technologies.
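
As a minimal illustration of the mapping exercise in point 1 (the parties, roles and responsibilities below are invented for the example):

    from dataclasses import dataclass

    @dataclass
    class Party:
        name: str
        role: str                    # "controller", "joint controller" or "processor"
        responsibilities: list[str]

    data_flow = [
        Party("Police Scotland", "controller",
              ["determine purpose and lawful basis", "complete and review the DPIA"]),
        Party("VendorCo", "processor",   # hypothetical private-sector supplier
              ["process only on documented instructions", "maintain security measures"]),
    ]

    for party in data_flow:
        print(f"{party.name}: {party.role} - {', '.join(party.responsibilities)}")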

Although the UK is no longer bound by the EU Charter of Fundamental Rights, Connon et al. (2023) argue that in contexts with a high potential for a cross-border dimension, compliance with the EU interpretation of Article 8 (Protection of Personal Data) should be taken into account when considering the deployment of technologies that depend on the processing of personal data. For details of the implications of case law in determining compliance with data protection law and Article 8 in relation to electronic databases, biometric identification systems, and surveillance and tracking systems, see Connon et al. (2023: 113-128). In summary, electronic databases should have policies in place that offer clarity on the circumstances in which data will be retained and the purposes for which it is used.

In relation to biometric identification systems, the conclusions of the court in the Bridges decision contain important considerations.[18] These include the acknowledgement that the more intrusive the act, the more precise and specific the law must be to justify it, and that public authorities have a duty to take steps to enquire about the potential impact of AFR (across protected characteristics) to satisfy the equality duty before, during and after a trial, with assessment of impacts including a mechanism of independent verification. Furthermore, critical issues regarding the interaction between private entities and law enforcement in the development and use of biometric systems were brought into sharp focus by the role of Clearview AI's facial recognition tool, the use of which has been challenged in several jurisdictions (see Connon et al. 2023: 117), e.g. on the basis of failure to obtain consent, to process information fairly and with lawful reason, or to meet data protection standards. The critical issue was the lack of a lawful basis, but in the Canadian investigation, for example, the potential discriminatory impact of facial recognition technologies was also emphasised.[19] The UK ICO investigation further highlighted the issue of commercial advantage, and the ICO issued an enforcement notice (including a requirement to delete any personal data of subjects residing in the UK from the Clearview database) and a monetary fine of £7,552,800.[20] Daly et al. (2023) highlight that no specific legal framework exists for facial recognition in policing in England and Wales (with existing regulation focusing on fingerprints and DNA evidence) and argue that the Bridges case demonstrates the need for more legal and policy activity to regulate facial recognition.

As Connon et al. (2023) argue, policing organisations need to consider how emerging technologies are positioned within regulatory regimes applying to private and public actors. The Council of Europe Guidelines on addressing the human rights impacts of algorithmic systems[21] make clear that the impact on human rights must be considered at every stage. The Council has also produced specific guidelines on the use of facial recognition[22] which state that necessity has to be assessed together with proportionality to the purpose and impact on the rights of the data subjects. Crucially, they highlight that a legal framework should be in place addressing each type of use and providing a detailed explanation of: the specific use and purpose; the minimum reliability and accuracy of the algorithm used; the retention duration; the possibility of auditing these criteria; the traceability of the process; and the safeguards.
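
Those criteria lend themselves to a structured, auditable record. The sketch below is our own illustration (the field names and example values are invented, not an official schema):

    from dataclasses import dataclass

    @dataclass
    class FacialRecognitionDeploymentRecord:
        specific_use_and_purpose: str     # detailed explanation of this use
        min_algorithm_reliability: float  # minimum reliability/accuracy required
        retention_period_days: int        # retention duration authorised by law
        criteria_auditable: bool          # the above criteria can be audited
        process_traceable: bool           # traceability of the process
        safeguards: list[str]             # safeguards in place

    record = FacialRecognitionDeploymentRecord(
        specific_use_and_purpose="retrospective matching against a custody image watch list",
        min_algorithm_reliability=0.99,
        retention_period_days=30,
        criteria_auditable=True,
        process_traceable=True,
        safeguards=["human review of every match", "DPIA reviewed before deployment"],
    )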

Although the SBC code of practice will address the acquisition, retention, use and destruction of biometric data for policing purposes, and data protection law must also be complied with for all processing of personal data, Connon et al. (2023) argue that more can be done to provide a supportive framework for the use of emerging technologies, including biometric identification systems. Connon et al. (2023) highlight good practice from New Zealand Police, which will be covered in chapter 5, but a couple of implications for legal frameworks are briefly highlighted here. Work by Lynch et al. (2020) and Lynch and Chen (2021) highlighted that the more sensitive the information being processed, the greater the need for specific legal structures to authorise processing and ensure the necessary reliability, transparency and accountability.

In relation to surveillance and tracking devices, Connon et al. (2023) highlight the Canadian Directive requirement that regulated entities undertake an algorithmic impact assessment[23] (using an open source tool) prior to adopting systems dependent on algorithms. Connon et al. (2023) argue that if a compulsory algorithmic impact assessment were being considered, it would need to be tailored to the policing context, because algorithms have the potential to mask, exacerbate and escalate human biases, and there are currently no minimum scientific or ethical standards an AI tool must meet before it can be used in the criminal justice system. In summary, Connon et al. (2023) identified specific legal concerns in relation to automated decision making and the use of AI.

Turning to AI and the European Union, Daly et al. (2023) point out that although the UK, and therefore Scotland, is no longer an EU Member State or subject to EU law, developments in the EU are of interest from both a comparative and a trading perspective. The proposed AI Act is domain neutral, cutting across sectors and the private-public divide, and so it was originally intended to apply to police and other law enforcement actors in the EU. The Act covers the development, placement on the market and use of AI systems (though there is less of a focus on use). The Act uses a risk-based approach that creates four categories, with a scale of legal constraints. It distinguishes between systems that pose:

a. an unacceptable risk and are therefore generally prohibited (though law enforcement enjoys a number of exceptions);

b. high risk systems that are permitted but more heavily regulated;

c. limited risk systems to which some regulation applies;

d. minimal risk systems that are not regulated, though the development of and adherence to codes of practice and similar frameworks is encouraged.

The use of AI by police potentially cuts across all four categories, though it is explicitly referred to under the rules pertaining to a) and b). Given the broad definition of AI, which includes statistical analysis software, some software routinely used by law enforcement agencies for some time without raising particular concerns (or at least not concerns framed in the language of trustworthy AI) could fall under the high-risk category (e.g. automated number plate recognition or forensic DNA matching). It is worth considering some of the implications of the EU's AI Act for a post-Brexit UK and Scotland outlined by Daly et al. (see 2023: 66-67), including: a) direct legal implications (from extraterritorial scope); b) pragmatic implications (de facto regulatory pressure for UK businesses and law enforcement); and c) whether the Act provides a good blueprint for Scotland.

a) Legal implications: the Act has some extraterritorial reach in providing safeguards for residents within the EU against the use of their data by providers of AI services located abroad (including in Scotland). For example, even if data of EU citizens had in principle been transferred lawfully for processing to a third country outside the EU under EU data protection law, the processing of that data may fall foul of the additional requirements the AI Act creates when the processing involves automated analysis and decision making using AI. This has implications for UK businesses providing AI services that also involve residents of the EU, and it could potentially affect cross-border police cooperation and data sharing.

b) Pragmatic implications: Daly et al. (2023) note that the EU's aspiration is that the AI Act will become a global standard; the proposal is already having some international impact (e.g. in Brazil), and the US is also stepping up its efforts to regulate the development and use of AI systems.

c) The AI Act as a regulatory blueprint within the UK, including Scotland? Daly et al. (2023) note that the Act aims to minimise trade barriers for AI products and services within the EU Single Market, which means that it pre-empts the ability of member states to regulate in response to local conditions. If a similar Act were to be adopted by the UK legislature, similar issues would arise for the ability of the Scottish Government to regulate AI in policing. The EU Act is domain independent, but presumably a Scottish AI Act could only regulate those uses of AI that are devolved matters, so a more domain-specific approach would be preferable. However, the current UK-wide approach to AI regulation is to issue a set of non-statutory cross-sectoral principles on AI. Daly et al. (2023) argue that it would be advisable for a binding code of practice to be adopted for AI uses by police in Scotland, given the concerns which have arisen with police use of previous technologies in the absence of such a code.

In looking towards a fair future, Daly et al. (2023) conclude that whilst Police Scotland are now more mindful of the ways in which Article 6 ECHR (right to a fair trial) is engaged in carrying out their functions, there has been limited analysis of how policing activities with digital technologies may engage ECHR Article 8 (right to private life) or have specific impacts for the protected groups in EA 2010. In addition, they point out that there are distinct and emerging risks that widespread use of surveillance tools and AI-enabled technologies may undermine citizens' digital rights and hinder their willingness to meaningfully participate in democratic processes.

Therefore, Daly et al. (2023) argue that Scottish Ministers, Police Scotland and other relevant decision makers must adopt and implement an approach that embeds and mainstreams equality and human rights into the use of emerging technologies in policing. This includes incorporating equality and human rights legal frameworks and principles into new legislation, codes of practice and guidance. For example, a position that the issue of the legal basis for cyber kiosks should be settled by litigation is not consistent with a best practice philosophy, transparency or accountability.

A number of recent engagements in Scotland on emerging technologies and policing are mentioned by Daly et al. (2023), with the assertion being that a human rights-based approach should be more front and centre, e.g. in the SPA working group on options for the future delivery, accreditation, oversight and governance of digital forensics in Scotland. The Biometrics IAG did provide a human rights analysis (incorporating the PANEL principles) and develop a draft Code of Practice.

Chapter 4 summary and conclusion

Further to policing legislation, key domestic and international legislation and relevant case law relating to emerging technologies in policing, including Human Rights, Equalities, Data Protection, Biometrics and the Law of Evidence, was reviewed. Impacts on rights and freedoms include privacy and discrimination, and may also occur at a societal level; interference with democratic freedoms should only be permitted if it is lawful, proportionate and necessary. On the other hand, with regulation and governance of design and development, emerging technologies such as AI may be used to advance, rather than put at risk, equality and human rights (e.g. real-time translation could support the right to a fair trial).

However, specific concerns exist in relation to automated decision making and the use of AI in policing surrounding data protection, transparency and the potential to exacerbate bias and discrimination. Although guidelines and an accountability framework for AI exist at UK level, these are not legally binding. Live Facial Recognition raises human rights concerns (particularly around proportionality) as well as being potentially discriminatory. Indeed, the Ryder Review recommended that a legally binding code of practice for live facial recognition should be formulated.

In conclusion, Daly et al. (2023) argue that there would be an advantage to Police Scotland in considering that Article 8 is engaged wherever their technology is used to collect data (including general or targeted surveillance) that could, on its own or in conjunction with other data, identify people or personal characteristics. Police Scotland needs to ensure and enhance its approach to data protection compliance, to ensure data protection by design and default and to ensure all risks to the rights and freedoms of individuals are identified, assessed and mitigated in DPIAs. In addition, Daly et al. (2023) argue that, in line with Scotland's National Action Plan for Human Rights, Police Scotland should further embed a human rights-based approach within the structures and culture of policing, including strengthening human rights training and accountability, e.g. evaluating the use of emerging technology, or a change of use of existing technology, by using the PANEL tool. SHRC recommends that policing should further embed human rights standards within five broad areas: policy and strategic decision making; operational planning and deployment; training and guidance; use and control; and investigation, monitoring and scrutiny. As part of demonstrating due regard to the needs of the PSED, equality should also be embedded in these areas. Daly et al. (2023) argue that by embedding human rights-based decision making, Police Scotland will have less need to engage in corrective action in response to external pressure. Their approach could be enhanced by internalising human rights knowledge and capacity, e.g. by employing equality and human rights experts to assist in policy design, deliver training and support officers in order to mainstream knowledge.

A number of key considerations relating to legal frameworks for emerging technologies in policing in Scotland are outlined here but may be found in full in Appendix C.

4.1 The continued implementation and reinforcement of a human rights-based approach to policing in Scotland is key.

4.2 The impacts of new technologies in policing on human rights and equalities need to be further considered, e.g. via the 6th ethics and human rights case (see chapter 8), and risks must continue to be assessed and mitigated throughout the lifecycle.

4.3 The legal basis for using policing powers vis-à-vis technologies must be clearly specified and shared with key stakeholders.

4.4 Whilst significant legislative gaps were not found, Scottish Government (and where appropriate the SBC) should seek to keep the legislative landscape under review and consider whether future technological deployments (such as Live Facial Recognition and certain applications of AI, e.g. in predictive policing) would benefit from the introduction of statutory codes of practice in order to provide greater clarity and safeguards. The possibility that certain applications of some technologies in policing should be categorically prohibited from use, either because they are unacceptably risky even with mitigation in place, or because they are intrinsically incompatible with human rights, should be considered in this context by Government.

4.5 Policing bodies must have special regard to the interests of children and vulnerable persons and how the technologies may impact upon them.

4.6 The use of new technologies should not unlawfully or unjustly adversely impact an individual or group of individuals and the processing should be within the reasonable expectations of the public.

4.7 Police Scotland should seek to publish Operational Practice Codes as soon as possible prior to the implementation of technologies, and proactive communications on use and effectiveness post-implementation. The necessity of drone deployment, rather than other means of investigation, must be considered given the likelihood that drones will capture sensitive personal data and the high risk of collateral intrusion.

4.8 Attention must be paid to personal data generated by technologies used by policing bodies, ensuring mission creep is managed and data is processed for specific, explicit and legitimate purposes. New systems must be in compliance with data protection law and additional safeguards should apply where processing relates to children and vulnerable people.

4.9 Data flows must be mapped and understood so the roles and obligations of multiple partners (including private vendors) under data protection law are understood prior to implementation of data sharing technologies.

Contact

Email: ryan.paterson@gov.scot