On 21 April 2021, the European Commission launched the EU AI Act proposal, draft legislation to promote “trustworthy AI” in the EU. Working with our civil society partners, Access Now has provided expert recommendations at every stage of the negotiations to ensure the EU AI Act protects fundamental rights. This EU AI Act timeline summarises civil society input and updates on the draft law.
The use of artificial intelligence (AI) technology poses new risks to human rights, particularly for people and communities targeted for discrimination and marginalisation. Access Now advocates for AI regulation grounded in internationally recognised human rights principles. Below is a summary of our proposed amendments to the draft EU AI Act and a timeline of our related commentary and recommendations.
Access Now’s proposed amendments to the EU AI Act
Working in coalition with a group of civil society organisations, Access Now released a joint statement in November 2021 that calls for fundamental rights protections in the EU AI Act: An EU Artificial Intelligence Act for Fundamental Rights – A Civil Society Statement. In May 2022, Access Now worked with our coalition partners to publish the following proposed amendments, focusing on the issues the coalition highlighted:
- Future Proofing the Risk-Based Approach of the EU AI Act: proposes allowing the updating of all risk categories (unacceptable, high risk, limited risk) to adapt to a changing technology market. Drafting led by Access Now.
- Prohibit Emotion Recognition in the EU AI Act: proposes including a comprehensive prohibition on emotion recognition. Drafting led by Access Now, EDRi, and EDF.
- Prohibit Discriminatory Forms of Biometric Categorisation: proposes prohibiting biometric categorisation in publicly accessible spaces, and any inherently discriminatory biometric categorisation. Drafting led by Access Now and EDRi.
- AI in Migration and Border Contexts: outlines the need to update the EU AI Act to include prohibitions in the migration context, update the high-risk list, and amend Article 83 to ensure all high-risk systems in migration are regulated, including those as part of large-scale EU IT systems. Drafted by Access Now, EDRi, PICUM, Petra Molnar, and Statewatch.
- Prohibit Remote Biometric Identification (RBI) in Publicly Accessible Spaces: proposes expanding the limited prohibition by applying it to all uses of RBI in publicly accessible spaces (real-time and post), by all actors, without exceptions. Drafting led by EDRi and Access Now.
- Strictly Regulate High-Risk Uses of Biometrics: proposes ensuring that, for permissible uses of biometric systems, rigorous safeguards and protections are in place to protect this sensitive data and mitigate the heightened risks of processing it with AI. Drafting led by EDRi and Access Now.
- Prohibit Predictive Policing: proposes a full prohibition on predictive policing systems to prevent discriminatory practices and protect the presumption of innocence. Drafting led by Fair Trials and EDRi.
- Obligations on Users and Fundamental Rights Impact Assessments: proposes introducing obligations on users of high-risk AI, designed to achieve greater transparency as to how high-risk AI is used, and ensure accountability and redress for uses of AI that pose a potential risk to fundamental rights. Drafting led by EDRi.
- Ensure Consistent and Meaningful Public Transparency: proposes amendments to ensure transparency to the public as to which AI systems are used, when, and for what purpose. Drafting led by Algorithm Watch.
- Ensure Meaningful Transparency of AI Systems for Affected People: proposes amendments to ensure people affected by AI systems are notified and have the right to seek information when impacted by AI-assisted decisions and outcomes. In addition, proposes amendments to ensure Article 52 reflects the full range of AI systems requiring individual transparency. Drafting led by Panoptykon Foundation.
- Rights and Redress for People Impacted by AI Systems: proposes ensuring people affected by AI systems are adequately protected, and have rights and the availability of redress when their rights have been impacted by AI systems. Drafting led by EDRi.
- Ensure Horizontal and Mainstreamed Accessibility Requirements for All AI Systems: proposes including accessibility requirements in the development and deployment of all AI systems. Drafting led by the European Disability Forum.
- Sustainability Transparency Measures for AI Systems: proposes ensuring minimum transparency on the ecological sustainability parameters for all AI systems in the AI Act. Drafting led by Algorithm Watch.
- Set Clear Safeguards for AI Systems for Military and National Security Purposes: proposes ensuring that the scope of the AI Act is not narrowed by a blanket exemption for national security. Drafting led by ECNL.
Access Now’s commentary and recommendations on the EU AI Act
In addition to our proposed amendments, we published the following to support our arguments for rights-based legislation: