On Wednesday, 26 June, the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG) published its “Policy and Investment Recommendations for Trustworthy AI”. Access Now’s European Policy Manager, Fanny Hidvegi, is one of the members of the HLEG. Although the Recommendations take steps toward addressing some of the most pressing concerns about AI systems, they fall short of doing what is needed to enforce the highest standards of human rights compliance for AI that is designed, developed, and deployed in the European Union. Access Now calls for concrete, actionable policies instead of the promotion of so-called “trustworthy AI”.
This new document builds upon the “Ethics Guidelines for Trustworthy AI” published in April of this year, which introduced the concept of “Trustworthy AI” as a voluntary framework to achieve legal, ethical, and robust AI. These policy and investment recommendations aim both to boost AI uptake in the EU and to provide direction for the development and deployment of “Trustworthy AI” in Europe and beyond.
When the Ethics Guidelines were published in April, we outlined a number of criticisms and made several recommendations to help ensure that the Guidelines would become more than an empty aspiration. Although the HLEG’s Policy and Investment Recommendations have taken up some of the most pressing concerns we previously highlighted, they unfortunately miss the mark on a number of important issues.
“What we need now is not more AI uptake across all sectors in Europe, but rather clarity on safeguards, red lines, and enforcement mechanisms to ensure that automated decision-making systems — and AI more broadly — developed and deployed in Europe respect human rights”, said Fanny Hidvegi, Access Now’s European Policy Manager.
The main problem with the HLEG’s Policy and Investment Recommendations is that their overriding aim is to achieve maximum uptake of AI — in both the public and private sectors — as soon as possible. This focus on uptake undermines the goal of ensuring that AI is genuinely trustworthy, and the HLEG has missed an opportunity to spell out the precautions needed to ensure that AI systems designed, developed, and deployed in Europe are legal, ethical, and robust, and, most importantly, respect human rights.
Despite this misguided focus on AI uptake, the paper does call on the European Commission to ban mass-scale citizen scoring, to develop stronger limitations on the use of biometrics, and to set specific criteria for red lines. These aspirations are to be applauded, but the new Commission and the other EU institutions must now act to translate them into concrete, actionable policies, working together with a much more diverse set of stakeholders and underrepresented voices.
Note: We plan to publish a detailed analysis in the coming weeks.