European Commission white paper on artificial intelligence: our response

Our response to the European Commission’s white paper on artificial intelligence, including its proposals for future regulation of AI products and services in the European Union.


Risk assessment

We agree that "the determination of what is a high-risk AI application should be clear and easily understandable and applicable for all parties concerned."

To achieve this objective, the White Paper proposes a two-factor, cumulative criterion for defining "high-risk" applications (in essence, a risky sector plus a risky intended use). While this criterion is clear and easily understandable, its simplicity might also create challenges when screening certain complex AI use cases. Using the White Paper's example (page 17), if an appointment booking system in the healthcare system failed or was erroneous (for instance, by assigning inappropriate priority to a patient's appointment), this could have life-threatening ramifications for patients who require life-saving treatment, and may therefore justify "high-risk" intervention, even though the intended use appears "low risk" when considered in isolation. There is also the possibility that dual- or multi-use applications could first be assessed in a "low-risk" sector and subsequently repurposed for use in a different, higher-risk context, possibly via a software update.
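
To make the screening behaviour concrete, the following sketch encodes the cumulative criterion as a simple boolean test. The sector labels, function name and example values are our own hypothetical illustrations, not drawn from the White Paper.

```python
# Illustrative sketch of the White Paper's two-factor, cumulative criterion:
# an application is "high-risk" only if the sector AND the intended use are
# both risky. The sector set and labels are hypothetical examples.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # illustrative only

def is_high_risk(sector: str, intended_use_is_risky: bool) -> bool:
    """Cumulative criterion: both factors must hold."""
    return sector in HIGH_RISK_SECTORS and intended_use_is_risky

# White Paper example: an appointment booking system used in healthcare.
# The intended use looks low-risk in isolation, so the criterion screens
# the system out, even though erroneous prioritisation of appointments
# could have life-threatening consequences.
print(is_high_risk("healthcare", intended_use_is_risky=False))  # False

# Dual-use concern: a one-off screening in a low-risk sector would not be
# repeated if the application were later repurposed, e.g. via an update.
print(is_high_risk("retail", intended_use_is_risky=True))  # False at launch
```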

The White Paper recognises that the cumulative criterion might not capture all risks and that "there may also be exceptional instances where, due to the risks at stake, the use of AI applications for certain purposes is to be considered as high-risk as such – that is, irrespective of the sector concerned". We agree that the purposes listed in the White Paper as an illustration (recruitment and workers' rights, and surveillance) warrant particular scrutiny, but a purpose-based list of exceptions would still not capture the type of complex, context-dependent risky application illustrated by the medical appointment booking system example above.
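
Extending the sketch above with a purpose-based exceptions list shows why the appointment booking example still falls outside the "high-risk" classification; the purpose labels are again purely illustrative.

```python
# Adding the White Paper's purpose-based exceptions, which apply
# irrespective of sector. All labels are illustrative only.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # as above
EXCEPTIONAL_PURPOSES = {"recruitment", "remote_surveillance"}  # illustrative

def is_high_risk(sector: str, purpose: str, intended_use_is_risky: bool) -> bool:
    if purpose in EXCEPTIONAL_PURPOSES:
        return True  # "high-risk as such", irrespective of sector
    return sector in HIGH_RISK_SECTORS and intended_use_is_risky

# The booking-system case still slips through: "appointment_booking" is not
# on the exceptions list, and its intended use still looks low-risk alone.
print(is_high_risk("healthcare", "appointment_booking", False))  # False
```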

More generally, we believe it will be challenging to differentiate in a binary fashion between high-risk and low-risk applications using a universal, clear and easily understandable set of criteria. Whilst guidelines such as those proposed may help developers and regulators, we believe that risk assessment is likely to require an approach that is both finer-grained and more holistic. This will require careful examination and a degree of judgement, and given that its outcome would significantly affect how an AI product is designed and brought to market, assessment might require a dialogue between developer and regulator. Such a dialogue would help ensure that the developer of a low-risk application does not implement unnecessary requirements because the application was mistakenly assessed as high-risk. It would also help avoid a situation whereby a developer (intentionally or unintentionally) incorrectly assesses a high-risk application as low-risk, which under the proposed process might only be detected after consumers have been harmed. Such a dialogue would require resources from both regulator and developer, and the impact on SMEs in particular would need to be considered.

Ultimately, risk is a continuum, and it might not be necessary to find an optimal binary criterion. We believe that the mandatory requirements listed in the White Paper for high-risk applications represent good design principles and desirable outcomes relevant to any AI application. The level of risk for a given application would then chiefly determine how stringently the application would need to meet those requirements.
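
To illustrate how requirements might scale with risk rather than switch on at a binary threshold, the following sketch maps a continuous risk score to increasingly stringent oversight tiers. The score, thresholds and tier descriptions are entirely hypothetical, offered only as an illustration of the principle.

```python
# A minimal sketch of risk as a continuum: the assessed level of risk
# scales how stringently the mandatory requirements apply, rather than
# acting as a binary gate. Scores, thresholds and tiers are hypothetical.

def requirement_stringency(risk_score: float) -> str:
    """Map a continuous risk score in [0, 1] to an oversight tier."""
    if risk_score >= 0.7:
        return "full conformity assessment before deployment"
    if risk_score >= 0.3:
        return "documented self-assessment, subject to audit"
    return "baseline good-practice requirements"

for score in (0.1, 0.5, 0.9):
    print(f"risk={score:.1f}: {requirement_stringency(score)}")
```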

The Commission also proposes establishing a voluntary labelling scheme for "no-high risk" AI applications – that is, applications not falling under the cumulative criterion or the exceptions listed in the White Paper, and hence not subject to the associated mandatory requirements. Whilst there are benefits to such an approach, we see some risk of consumer confusion which would need careful handling. For example, differentiating between "no-high risk" applications that carry the label and those that do not might suggest to consumers that the latter are not only of lower quality but also potentially "less safe", particularly if consumers mistakenly interpret the label as similar to a CE marking.

Contact

Email: ai@gov.scot
