The European Commission’s consultation on the “White Paper on Artificial Intelligence — a European approach to excellence and trust” closes on Sunday, June 14. In its current form, the EU’s policy approach will bring about neither trust nor excellence in automated decision-making (ADM) / artificial intelligence (AI) systems, and will do nothing to ensure that both the private and public sectors respect and promote human rights in the context of artificial intelligence.
Access Now has responded to the consultation and has put forward recommendations, and we urge human rights experts and civil society organisations to make their voices heard.
Do not promote the indiscriminate uptake of artificial intelligence
The uptake of any technology, particularly in the public sector, should not be a standalone goal; it has no value in itself. AI-based approaches to problems have advantages in certain cases, but they also carry unique risks. AI is therefore not the right solution in every case and should not be viewed as a panacea. Where there are no serious negative impacts and there is evidence of real benefit, AI-based systems can be considered as one option alongside other approaches, but we must ensure that policy makers are not led astray by marketing slogans and unfounded AI hype.
The European Commission evidently wants more AI uptake, without clearly demonstrating why, and is therefore willing to make some effort to ensure the technology is trustworthy by mitigating its risks. Instead, the EU should earn people’s trust for its AI initiatives by putting the protection of fundamental rights ahead of concerns about global competitiveness in AI. The primary objective should be to avoid individual and societal harms, not merely to mitigate them.
The EU should acknowledge that public procurement offers an opportunity to enforce high standards on AI systems and thereby contribute to its goal of creating an ecosystem of trust and excellence. Any use of AI systems in the public sector should be subject to especially high standards of transparency, and every measure should be taken to ensure that fundamental rights are protected. Public procurement processes should not become tools to incentivise the indiscriminate uptake of AI/ADM systems.
Implement a rights-based approach with mandatory human rights impact assessments
Access Now believes that a human rights-based approach is essential to ensure the EU’s attempt to build trustworthy AI actually deserves our trust and is not just an empty brand name.
As opposed to a binary risk assessment approach, Access Now argues that for all applications in all domains, the burden of proof should be on the entity seeking to develop or deploy the AI system to demonstrate, via a mandatory human rights impact assessment (HRIA), that it does not violate human rights. This must apply to both the public and the private sector, as part of a broader due diligence framework.
Establish public registers for AI/ADM systems
Without the ability to know whether AI/ADM systems are being deployed, all other efforts to reconcile fundamental rights with AI/ADM systems are doomed to fail. Access Now and AlgorithmWatch therefore jointly call for a mandatory disclosure scheme for AI/ADM systems. We ask for new EU legislation to mandate that member states establish public registers of AI/ADM systems used by the public sector and, in certain cases, by the private sector.
Such registers should be used to make public the results of Algorithmic Impact Assessments (AIA) / Human Rights Impact Assessments (HRIA). They should come with a legal obligation for those responsible for the AI/ADM system to disclose and document the purpose of the system, an explanation of the model (the logic involved), and information on who developed the system. The register should include a notification and coordination mechanism for relevant authorities and the potential new centre of expertise (see below). The register system should complement the minimum standards provided by national freedom of information laws and the transparency requirements of public procurement processes.
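As a purely illustrative sketch (every field name here is our own assumption, not drawn from the White Paper or any existing legislation), the disclosure obligations described above could translate into a minimal register entry along these lines:

```python
# Hypothetical sketch of one public-register entry, reflecting the
# disclosure obligations discussed above. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class RegisterEntry:
    system_name: str        # name of the AI/ADM system
    operator: str           # public body or company deploying it
    developer: str          # who developed the system
    purpose: str            # documented purpose of the system
    model_explanation: str  # explanation of the model (logic involved)
    hria_results_url: str   # link to the published HRIA/AIA results
    authorities_notified: list[str] = field(default_factory=list)


# Example entry (entirely fictional):
entry = RegisterEntry(
    system_name="Benefit-application risk scoring",
    operator="Example municipality",
    developer="Example vendor",
    purpose="Flag benefit applications for manual review",
    model_explanation="Statistical classifier over application data",
    hria_results_url="https://register.example.eu/hria/123",
    authorities_notified=["national DPA", "centre of expertise"],
)
print(entry)
```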
While disclosure of AI/ADM systems should be mandatory for the public sector in all cases, these transparency requirements should also apply to the use of AI/ADM systems by private entities whenever a system has a significant effect on an individual, a specific group, or society at large, and should include a mandatory notification requirement to the relevant authorities.
Ban applications that are incompatible with fundamental rights, such as biometric technologies that enable mass surveillance
The EU must make it an explicit policy objective to stop or ban applications of automated decision-making or AI systems in areas where mitigating a potential risk or violation is not enough and no remedy or other safeguarding mechanism can fix the problem. This approach is in line with the basic values of the European Union, as enshrined in the Treaties and the EU Charter of Fundamental Rights.
Mass surveillance constitutes one of the most egregious violations of our fundamental rights and freedoms and must be banned outright. If the EU truly wants to show leadership in promoting rights-respecting, trustworthy AI, then it must ban the development and deployment of such applications of AI. The EU cannot remain in the race with China and the US when it comes to developing mass surveillance technologies and still claim to promote trustworthy AI. Rather, the EU must establish red lines to ban applications of AI which are incompatible with fundamental rights:
- indiscriminate biometric surveillance and biometric capture and processing in public spaces or via wearable devices;
- use of AI as the sole basis for determining access to or delivery of essential public services (such as social security, policing, migration control);
- uses of AI which purport to identify, analyse and assess emotion, mood, behaviour, and sensitive identity traits (such as race, disability) in the delivery of essential services;
- uses of AI to make behavioural predictions with significant effects on people based on past behaviour, group membership, or other characteristics, as in predictive policing;
- use of AI systems at the border or in testing on marginalised groups, such as undocumented migrants;
- use for autonomous lethal weapons and other uses which identify targets for lethal force (such as in law and immigration enforcement);
- use for general-purpose scoring of citizens or residents, otherwise referred to as unitary scoring or mass-scale citizen scoring; and
- applications of automation that are based on flawed scientific premises, such as inferring emotion from facial analysis.
Access Now, along with the other members of the European Digital Rights (EDRi) network, has already laid down one clear red line: the capture and processing of biometric data in publicly accessible spaces. Such uses of biometric data significantly contribute to unlawful mass surveillance and should therefore be banned, as outlined in the EDRi network paper Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States. The paper establishes that such uses will transform public spaces into sites of continuous watching and irreversibly compromise the fundamental rights to privacy, freedom of assembly, expression, non-discrimination, data protection, and a fair trial, as well as democracy and the presumption of innocence.
Establish national centres of AI expertise to help existing regulators
Access Now and AlgorithmWatch jointly call for the establishment of independent national centres of expertise on AI to monitor, assess, research, and report on the societal and human rights implications of the use of automated decision-making and AI systems, and to advise government and industry in coordination with regulators, civil society, and academia. The overall role of these centres is to create a meaningful accountability system that links the objectives of the “ecosystem of excellence” and the “ecosystem of trust”.
Each centre should be an independent statutory body with a central role in coordination, policy development, and national strategy relating to AI, helping to build the capacity of existing regulators, government, and industry bodies to respond to the increased use of AI systems.
These national centres of expertise can provide oversight for investment and research funding according to public-interest criteria, as well as issue guidelines for the development, procurement, and deployment of AI systems in different sectors. They should further support small and medium-sized enterprises (SMEs) in fulfilling their obligations under human rights due diligence, including the aforementioned step of conducting a human rights impact assessment (HRIA), and in registering ADM/AI systems in the public register discussed above. This support for SMEs can be bolstered by involving civil society organisations, stakeholder groups, and existing enforcement bodies, such as data protection authorities (DPAs) and national human rights bodies, in the centres of expertise. Ultimately, such collaboration between diverse stakeholders will benefit all aspects of the ecosystem and build trust, transparency, and cooperation among all actors.
Although these centres of expertise should not have regulatory powers, they can provide essential expertise to aid and coordinate among regulatory bodies, human rights bodies, data protection authorities, and other actors in their work. They should also monitor enforcement across sectors and publish quarterly reports on it. Individual complaints, collective redress mechanisms, and ex officio investigations related to violations or risks should remain within the powers of the existing human rights bodies or other regulators that have jurisdiction over them.
Enforce high scientific standards
According to the High-Level Expert Group’s Ethics Guidelines for Trustworthy AI, it is essential that Trustworthy AI is robust, meaning that an AI system should “perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts.” Fundamental to this aim is that AI developed and deployed in the EU conforms to high scientific standards: no system can be safe, secure, and reliable if it is based on flawed scientific premises. Unfortunately, this has not been the case to date, not only for AI applications developed by private companies, but also for AI applications funded by the EU (such as a controversial AI “lie detector” funded under Horizon 2020, and further investments in scientifically baseless emotion detection applications). We therefore call on the EU to enforce high scientific standards for AI designed, developed, and deployed in the EU.
Conclusion
Access Now welcomes the opportunity to submit a response to the public consultation on the “White Paper on Artificial Intelligence — a European approach to excellence and trust”.
We published our draft response a few weeks ago and joined a number of events to encourage human rights experts and civil society organisations, in the digital rights community and beyond, to engage with this consultation. This is just the beginning of the policy and lawmaking process, but the Brussels conversation has been painfully missing the voices of affected people and communities. While we don’t claim to represent all of civil society or all users at risk, we at Access Now are committed to facilitating your participation, channelling your views, and amplifying your opinions in this EU debate.
Access Now has been working globally to advance the rights of users with respect to data protection, privacy, and digital security since 2009. The emergence of, and increasing reliance on, artificial intelligence, automated decision-making processes, and profiling raise some of the most challenging human rights issues of the 21st century. Our goal is to continue to work with affected communities, civil society, academics, and experts in both the private and public sectors to develop sound policy recommendations for the stakeholders involved in regulating the use and development of AI.