
EU Trilogues: The AI Act must protect people’s rights

A civil society statement on fundamental rights in the EU Artificial Intelligence Act 

As European Union institutions begin trilogue negotiations on the Artificial Intelligence Act (AI Act), civil society calls on them to ensure that the Regulation puts people and fundamental rights first.

In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to claim asylum, predict our emotions and categorise us, and make crucial decisions that determine our access to public services, welfare, education, and employment.

Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance and structural discrimination, entrench the centralised power of large technology companies, enable unaccountable public decision-making, and cause environmental damage.

We call on EU institutions to ensure that AI development and use are accountable and publicly transparent, and that people are empowered to challenge harms:

1. Empower affected people with a framework of accountability, transparency, accessibility, and redress

It is crucial that the EU AI Act empowers people and public interest actors to understand, identify, challenge, and seek redress when the use of AI systems exacerbates harms and violates fundamental rights. To do this, the AI Act must establish a framework of accountability, transparency, accessibility, and redress. This must include:

2. Draw limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities

AI systems are increasingly developed and deployed for harmful and discriminatory forms of state surveillance. Such systems disproportionately target already marginalised communities, undermine legal and procedural rights, and contribute to mass surveillance. When AI systems are deployed in the context of law enforcement, security, and migration control, there is an even greater risk of harm and of violations of fundamental rights and the rule of law. To maintain public oversight and prevent harm, the EU AI Act must include:

  • A full ban on real-time and post remote biometric identification in publicly accessible spaces, by all actors, without exception;
  • A prohibition of all forms of predictive and profiling systems in law enforcement and criminal justice (including systems which focus on and target individuals, groups and locations or areas);
  • A prohibition on the use of AI in migration contexts to make individual risk assessments and profiles based on personal and sensitive data, and on predictive analytic systems used to interdict, curtail, and prevent migration;
  • A prohibition on biometric categorisation systems that categorise natural persons according to sensitive or protected attributes as well as the use of any biometric categorisation and automated behavioural detection systems in publicly accessible spaces;
  • A ban on the use of emotion recognition systems to infer people’s emotions and mental states;
  • A rejection of the Council’s blanket exemption from the AI Act for AI systems developed or used for national security purposes;
  • The removal of the exceptions and loopholes for law enforcement and migration control introduced by the Council;
  • Public transparency as to what, when, and how public actors deploy high-risk AI in the areas of law enforcement and migration control, without any exemption from the obligation to register high-risk uses in the EU AI database.

3. Push back on Big Tech lobbying: remove loopholes that undermine the regulation

The EU AI Act must set clear and legally certain standards of application if the legislation is to be effectively enforced. The legislation must uphold an objective process to determine which systems are high-risk, and remove any ‘additional layer’ added to the high-risk classification process. Such a layer would allow AI developers, without accountability or oversight, to decide whether or not their systems pose a ‘significant’ enough risk to warrant legal scrutiny under the Regulation. A discretionary risk classification process risks undermining the entire AI Act: it would shift the Regulation towards self-regulation, pose insurmountable challenges for enforcement and harmonisation, and incentivise larger companies to under-classify their own AI systems.

Negotiators of the AI Act must not give in to the lobbying efforts of large tech companies seeking to circumvent regulation for financial gain. The EU AI Act must:

  • Remove the additional layer added to the risk classification process in Article 6 and restore the clear, objective risk-classification process outlined in the original position of the European Commission;
  • Ensure that providers of general purpose AI systems are subject to a clear set of obligations under the AI Act, so that smaller providers and users do not bear the brunt of obligations better suited to original developers.

Drafted by:

Access Now

AlgorithmWatch

Amnesty International 

Bits of Freedom

Electronic Frontier Norway (EFN)

European Center for Not-for-Profit Law (ECNL)

European Digital Rights (EDRi)

European Disability Forum (EDF)

Fair Trials

Hermes Center 

Irish Council for Civil Liberties (ICCL)

Panoptykon Foundation 

Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM)

Signed by:

Academia Cidadã – Citizenship Academy 

Africa Solidarity Centre Ireland 

AlgoRace 

Algorights 

All Faiths and None 

All Out 

Anna Henga 

Anticorruption Center 

ARSIS – Association of the Social Support of Youth

ARTICLE 19  

Asociación Por Ti Mujer 

Aspiration 

Association for Juridical Studies on Immigration (ASGI) 

Association Konekt 

ASTI asbl – Luxembourg 

AsyLex 

Austrian Human Rights League

Avaaz 

Balkan Civil Society Development Network 

Bulgarian Center for Not-for-Profit Law (BCNL)

Bürgerrechte & Polizei/CILIP, Germany 

Canadian Civil Liberties Association 

Charity & Security Network  

Citizen D / Državljan D 

Civil Liberties Union for Europe

Civil Society Advocates 

Coalizione Italiana Libertà e Diritti civili 

Comisión General Justicia y Paz de España 

Commission Justice et Paix Luxembourg 

Controle Alt Delete 

Corporate Europe Observatory (CEO) 

D64 – Zentrum für Digitalen Fortschritt e. V.

DanChurchAid (DCA) 

Danes je nov dan, Inštitut za druga vprašanja

Data Privacy Brasil 

Data Privacy Brasil Research Association 

Defend Democracy 

Democracy Development Foundation 

Digital Security Lab Ukraine 

Digital Society, Switzerland 

Digitalcourage 

Digitale Gesellschaft 

Digitalfems  

Diotima Centre for Gender Rights & Equality

Donestech 

epicenter.works – for digital rights 

Equinox Initiative for Racial Justice 

Estonian Human Rights Centre 

Eticas 

EuroMed Rights 

European Anti-Poverty Network (EAPN) 

European Center for Human Rights  

European Center for Not-for-Profit Law 

European Civic Forum 

European Movement Italy 

European Network Against Racism (ENAR) 

European Network on Statelessness 

European Sex Workers Rights Alliance (ESWA)

Fair Vote 

FEANTSA, European Federation of National Organisations Working with the Homeless

Free Press Unlimited 

Fundación Secretariado Gitano 

Gong 

Greek Forum of Migrants  

Greek Forum of Refugees 

Health Action International 

Hiperderecho 

Homo Digitalis 

horizontl Collaborative 

Human Rights Watch 

I Have Rights 

IDAY-Liberia Coalition Inc. 

ILGA-Europe (the European region of the International Lesbian, Gay, Bisexual, Trans and Intersex Association)

info.nodes 

Initiative Center to Support Social Action “Ednannia”

Institute for Strategic Dialogue (ISD) 

International Commission of Jurists 

International Rehabilitation Council for Torture Victims

IT-Pol 

Ivorian Community of Greece 

Kif Kif vzw 

KOK – German NGO Network against Trafficking in Human Beings

KontraS 

Kosovar Civil Society Foundation (KCSF) 

La Strada International  

Lafede.cat 

LDH (Ligue des droits de l’Homme)

Legal Centre Lesvos 

Liberty  

Ligali / IDPAD (Hackney) 

Ligue des droits humains, Belgium

LOAD e.V. 

Maison de l’Europe de Paris  

Metamorphosis Foundation 

Migrant Tales 

Migration Tech Monitor 

Mnemonic 

Mobile Info Team 

Moje Państwo Foundation 

Moomken Organization for Awareness and Media

National Campaign for Sustainable Development Nepal

National Network for Civil Society (BBE) 

National old folks of Liberia.com 

Novact 

Observatorio Trabajo, Algoritmo y Sociedad 

Open Knowledge Foundation Germany 

Partners Albania for Change and Development

Politiscope 

Privacy First 

Privacy International 

Privacy Network 

Promo-LEX Association 

Prostitution Information Center (PIC)

Protection International

Public Institution Roma Community Centre 

Racism and Technology Center 

Red en Defensa de los Derechos Digitales  

Red Española de Inmigración y Ayuda al Refugiado

Refugee Law Lab, York University

REPONGAC

SHARE Foundation 

SOLIDAR & SOLIDAR Foundation 

Statewatch 

Stichting LOS 

Superbloom (previously known as Simply Secure)

SUPERRR Lab 

SwitchMED – Maghweb 

Symbiosis 

TAMPEP European Network for the Promotion of Rights and Health among Migrant Sex Workers

TEDIC – Paraguay  

The Border Violence Monitoring Network 

The Good Lobby 

Transparency International 

Volonteurope 

WeMove Europe 

Xnet