As artificial intelligence proliferates across Europe and beyond, the unjust and harmful realities associated with its use are coming to light, from the identification and targeting of peaceful protesters to discrimination based on perceived gender. At the same time, a global movement for human rights protections and for banning some AI applications is gaining momentum, forcing governments to scramble to craft new regulatory responses and to think beyond merely fostering innovation.
Access Now’s latest report, Europe’s approach to artificial intelligence: how AI strategy is evolving, explores the actions EU governments are taking to promote what the EU calls Trustworthy AI, what this approach means for human rights, and how European AI strategy is changing, both for EU institutions and national governments. See the report snapshot for a summary.
“While AI has the potential to deliver benefits to society, it also causes irreparable harm, and impacts the human rights of millions across the globe. EU policy and strategy choices must show that the government will put people and their rights ahead of innovation at any and all cost,” said Fanny Hidvégi, Europe Policy Manager at Access Now. “At Access Now, we hope to see the EU continue its efforts as a global leader in protecting and promoting human rights, and the European Commission live up to this legacy as it develops the upcoming proposal on AI in 2021.”
The report’s key findings include:
- Governments around the world are adopting “Trustworthy AI,” often influenced by the EU’s approach. But in many cases, theory has yet to turn into practice.
- Discussions are moving from ethics to human rights, with growing calls to ban facial recognition. Taking an “ethics”-based approach to facial recognition and other dangerous applications of AI would leave millions exposed to potential human rights violations, with little to no recourse.
- EU stakeholders have not reached consensus on how to regulate AI, and there is significant divergence in what they want to see in the upcoming EU legislation and in their respective national policies. Although some fear over-regulation, the majority of stakeholders embrace and advocate for some form of EU intervention on AI.
- Stakeholders agree that transparency, in both the private and public sectors, is a minimum requirement for attaining Trustworthy AI and for enabling oversight and monitoring. While transparency alone is not enough to protect fundamental rights, a mandatory public register for public sector use of automated decisions is a must.
Published in partnership with the Vodafone Institute, the report is a follow-up to the 2018 report that mapped AI strategies across the EU. It covers policy developments from the past two years, gathered through a series of roundtable discussions and interviews with key stakeholders, including government representatives and civil society.