We fully support the EU's recognition of the importance of adequate and proactive information provision about the use of AI systems. We would welcome a wider exploration of transparency and explainability and of what minimum standards would be appropriate, particularly for use cases such as statutory decision-making about individuals by public authorities and life-critical decision-making, for instance in health care. This is an active area of multidisciplinary research, covering both the development of technological solutions to better explain the decisions of neural networks and effective ways of engaging with customers.
Regarding robustness, we agree that consideration should be given to "requirements ensuring that AI systems are resilient against both overt attacks and more subtle attempts to manipulate data or algorithms themselves, and that mitigating measures are taken in such cases." Adversarial attacks on AI systems, particularly in computer vision applications, remain an unsolved challenge with major safety implications, for example for autonomous vehicles. We believe it is urgent to begin providing legal clarity on the level of robustness that would be appropriate for such AI applications, while bearing in mind that this is a rapidly evolving field.
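As an illustration of how simple the "subtle attempts to manipulate data" referred to above can be, the sketch below applies the well-known fast gradient sign method to a toy linear classifier. The model, weights, input and perturbation budget are all invented for illustration; real attacks target far larger models, but the mechanism is the same.

```python
import numpy as np

# Illustrative sketch only: a fast-gradient-sign perturbation against a toy
# logistic-regression classifier. All numbers here are invented.
rng = np.random.default_rng(0)
w = rng.normal(size=8)             # hypothetical trained weights
x = rng.normal(size=8)             # a benign input
y = 1.0                            # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss with respect to the input x.
grad_x = (sigmoid(w @ x) - y) * w

eps = 0.3                          # small perturbation budget per feature
x_adv = x + eps * np.sign(grad_x)  # step in the sign of the gradient

print("clean score:      ", sigmoid(w @ x))
print("adversarial score:", sigmoid(w @ x_adv))
```

The perturbed input differs from the original by at most 0.3 per feature, yet the classifier's confidence in the correct label drops sharply, which is why robustness requirements of the kind quoted above matter in practice.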
More generally, it is important to recognise that meeting the requirements listed in the White Paper will require trade-offs, for instance between performance (robustness and accuracy) and explainability/transparency, or between different, mutually incompatible definitions of fairness to achieve non-discrimination. For developers to comply and make appropriate trade-offs, it is critical that they are provided with clear and practical definitions of key terms (such as AI, diversity, human oversight and autonomy) and helpful guidelines. We welcome the work of the AI High-Level Expert Group in this area and look forward to seeing its updated guidelines following industry feedback. As previously mentioned, we believe there is value in encouraging best practice and upskilling in the key areas identified as part of the requirements, whether or not an AI application is high-risk.
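The incompatibility between fairness definitions mentioned above is not merely theoretical: it follows from simple arithmetic whenever base rates differ between groups. The sketch below uses invented numbers to show that enforcing demographic parity (equal selection rates) makes equal true positive rates unattainable.

```python
# Hypothetical numbers: group A has 60 truly qualified candidates out of 100,
# group B has 30 out of 100. Demographic parity forces equal selection rates.
groups = {"A": {"n": 100, "pos": 60}, "B": {"n": 100, "pos": 30}}
selected = 50  # select exactly 50 from each group (50% selection rate)

max_tpr = {}
for g, d in groups.items():
    # Best case for this group: every selection slot goes to a true positive.
    tp = min(selected, d["pos"])
    max_tpr[g] = tp / d["pos"]
    print(g, "selection rate:", selected / d["n"],
          "best achievable true positive rate:", round(max_tpr[g], 2))
```

Even under the most favourable allocation, group A's true positive rate is capped below group B's, so a developer must choose which fairness criterion to prioritise; this is exactly the kind of trade-off for which clear definitions and guidelines are needed.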
As discussed above in relation to training data, assessing conformity with the requirements will generally be a complex task. It will require regulators to develop skills in new areas and to collaborate with other bodies. We therefore welcome the Commission's consideration of a "network of national authorities, as well as sectorial networks and regulatory authorities, at national and EU level" to facilitate the implementation of the legal framework.