On Monday 8 April, the European Commission’s High Level Expert Group on Artificial Intelligence (HLEG) published its “Ethics Guidelines for Trustworthy AI”. Our own Fanny Hidvegi, European Policy Manager at Access Now, is one of the members of the HLEG.
Access Now supports the values and objectives of this initiative, but we stress that it can only be a first step. In its mission to promote innovation and emerging technologies, and in promoting Trustworthy AI, the European Union has the responsibility to ensure respect for democratic values, human rights, and the rule of law. The Guidelines introduce and define the concept of Trustworthy AI as a voluntary framework for achieving lawful, ethical, and robust AI, and then lay out a list of requirements and an assessment list to operationalise this concept.
The process of producing the Guidelines offers a useful starting point for thinking about where the EU should go with regard to AI development, but as a political initiative and policy instrument, the Guidelines alone will never be sufficient to ensure that the design and deployment of AI is individual-centric and respects human rights. To achieve that objective, Trustworthy AI must be matched by the enforcement and development of adequate safeguards.
Together with the consumer groups BEUC and ANEC, we have issued a press release with our high-level thoughts on the Guidelines. Here we expand on our thinking and present the most important takeaways from the Ethics Guidelines, along with our thoughts on what should happen next.
What’s good in the Ethics Guidelines?
Any attempt to formulate a set of ethics guidelines faces two significant challenges: on the one hand, the lack of ultimate consensus about ethical standards; on the other, the fact that such guidelines are voluntary.
“But even after compliance with legally enforceable fundamental rights has been achieved, ethical reflection can help us understand how the development, deployment, and use of AI may implicate fundamental rights and their underlying values, and can help provide more fine-grained guidance when seeking to identify what we should do rather than what we (currently) can do with technology.” §40, Ethics Guidelines for Trustworthy AI, High Level Expert Group on Artificial Intelligence
The Guidelines tackle the first of these challenges by founding their ethical principles on human rights. The four ethical principles, which they present as imperatives for those developing and deploying AI systems, are the following:
- Respect for human autonomy
- Prevention of harm
- Fairness
- Explicability
These principles can draw force from their explicit grounding in human rights. Asking high-level ethical questions based on these four principles can be a way for those designing and deploying AI systems to reflect upon the potential ethical and human rights implications of their work. In this way, ethics and human rights can reinforce one another.
The second challenge for any set of ethical guidelines lies in their voluntary nature. Here again, the value of the HLEG’s approach is clear. Because the Guidelines’ ethical principles are based on fundamental rights as enshrined in the EU Charter of Fundamental Rights, the move from voluntary adherence to solid regulation should be more straightforward in the future.
Where ethics lacks the means of enforcement, international human rights law possesses well-developed standards and institutions as well as a universal framework for safeguards. Ethical principles grounded in human rights can take advantage of this well-established structure to ensure that AI is developed, deployed, and used in a manner that respects our fundamental rights.
What’s missing from the Ethics Guidelines?
First, the process used to generate these guidelines has been misguided from the start. The European Commission created a clear imbalance in the composition of the HLEG by appointing a majority of business representatives alongside only a handful of civil society representatives, thereby undermining its mandate to give “recommendations on future-related policy development and on ethical, legal, and societal issues related to AI”. Beyond inadequate stakeholder representation, this imbalance gave outsized influence to industry concerns. It must be rectified in the next steps of the process to ensure the expert group’s ability to formulate human-centric recommendations.
Beyond this issue of group composition, and despite the strength of the guidelines’ ethical framework, there are several substantive points where they fall short. Trustworthy AI has three fundamental components: it should be lawful, ethical, and robust. While the guidelines discuss ethics at length, they miss at least one key issue concerning robustness and they entirely omit the legal component.
Regarding robustness, the guidelines fail to address the disturbing link between the recent scandals surrounding Big Tech and the advertising business model that dominates the industry, with or without the AI component. Although tech companies are eager to frame damaging or exploitative uses of their algorithms as the results of “hacks” or “bugs”, the reality is that the very features and incentives that make these algorithms so profitable are also what invite and facilitate such abuses.
In terms of the legal component of Trustworthy AI, the guidelines do state that AI systems must respect all applicable laws and regulations, but they acknowledge that law can often lag behind the pace of technological change. The HLEG assumes that ethics is best placed to fill this gap and provide guidance where existing rules fail to provide coverage.
While ethical reasoning and individual conscience certainly have a role to play, we believe that it is important to stress the role of human rights here. Beyond their embodiment in specific laws, human rights offer us a broad and well-defined set of principles to cover all instances in which our dignity and integrity are threatened.
The danger of focusing too much on ethics is that we lose sight of the ultimate goal: formalising the principles of Trustworthy AI into enforceable laws. Whereas the violation of an ethical principle can perhaps be written off as collateral damage, human rights offer us a set of principles that command respect and adherence in all circumstances.
The HLEG’s guidelines state that the Assessment List is not a series of boxes that need to be ticked, but rather an opportunity for open and sustained ethical reflection. Such an open reflection has its place, but only after we have ticked all the boxes for legal and human rights compliance. What is important now is to know what boxes need to be ticked in all situations.
What’s next?
For this ethical reflection to translate into action, the European Commission must now clarify how different stakeholders can test, apply, improve, endorse, and enforce Trustworthy AI.
For the planned piloting phase of the Assessment List in the Ethics Guidelines, the European Commission should consider and encourage various ways to test and improve the operationalisation of Trustworthy AI. Civil society organisations will need additional resources, possibly including financial support, to meaningfully participate in and contribute to the piloting of the ethics guidelines, including identifying and addressing human rights concerns.
We urge the European Commission to recognise the need to lay down Europe’s red lines to prevent the development or deployment of AI in certain areas, and to examine how we can ensure that Trustworthy AI does not become an empty brand name.
To counter the challenges that the design, development, and deployment of AI systems pose to our society, we complement our proposed recommendations with the following next steps for the European Commission:
- A full clarification and explanation of the legal component of Trustworthy AI through a comprehensive mapping of existing legislation that applies to AI development and deployment, and the identification of legal uncertainties and gaps;
- An update of existing legislation, where needed, particularly in the fields of safety, liability, consumer, and data protection law;
- An evaluation and update of current enforcement mechanisms with regard to human rights compliance in both public and private deployment of AI;
- Further development of how human rights and algorithmic impact assessments should accompany the design, development and deployment of AI systems in an enforceable manner.
The human cost of automated decisions
AI evangelists claim that this technology will make all of our lives better and more efficient, but potential societal benefit cannot simply justify individual harm. Numerous voices have already pointed out how the implementation of AI “solutions” has created more human drudgery rather than alleviating it. Indeed, we increasingly hear of cases where much-hyped automated processes have been revealed to be “fauxtomation” or “artificial artificial intelligence” that actually relies on the exploitation of cheap overseas labour to function properly.
The onus is on the proponents of AI to demonstrate that they are developing a technology we can trust and not just the latest brand of snake oil. If the benefits of AI come at the cost of our fundamental rights, or involve further marginalising and exploiting the most vulnerable members of our global society, then this is an ethical trade-off that we have to reject. To ensure that we move towards a better society where fundamental rights are extended to all and robustly protected, we in Europe have to build in human rights from the beginning.
This post was co-authored by Fanny Hidvegi and Daniel Leufer.