From October 2019 to July 2020, I was hosted by Access Now as a Mozilla Fellow. During this time, I worked on a project to develop resources to counter the hype, misconceptions, myths, and inaccuracies about “artificial intelligence.” The result of that research is a new website called AI Myths, which was launched during RightsCon 2020.
This website provides resources to debunk eight of the most harmful myths and misconceptions about artificial intelligence, from the idea that AI can solve any problem to the misguided belief that AI systems can be objective or unbiased. As AI systems are leveraged across domains — from detecting hate speech to allocating social welfare benefits — civil society organizations increasingly have to address the role of AI in their work. This often means having to combat hype and overselling on the part of companies pushing their products and governments looking for quick-fix solutions. The goal of this project is to help civil society organizations and others cut through the most common misconceptions, so they can understand how these systems work and ensure they don’t undermine people’s rights.
In this post, I reflect on how the project came about, and explain how it links into the work I took part in during my fellowship and will now engage in as I join the staff at Access Now.
Birth of a project: coordinating civil society’s work on AI and human rights
At RightsCon Tunis in 2019, I took part in a “Solve my Problem” session that brought together representatives from civil society, international institutions, governments, and companies. What connected everyone in the room was that we were all working to ensure that AI development and deployment respects human rights.
The problem that we wanted to solve was how to coordinate better in our work on AI and human rights. Together we identified a shared roadblock: the huge amount of time we spent refuting misconceptions about AI, including attempting to clarify the vague term “artificial intelligence” and interrogating the idea that any regulation of AI will necessarily kill innovation.
After brainstorming to develop a list of the most harmful misconceptions, myths, and inaccuracies about AI, we discussed how best to tackle them. At the time, I was applying for the Open Web fellowship from the Mozilla Foundation and the Ford Foundation, and busting myths about AI fit in perfectly with my project proposal.
When I received the astonishing news that I had been granted the Mozilla fellowship, I set to work consulting with more stakeholders on the resources they needed to help in their work on AI, and published a call for submissions to get broader feedback on myths about AI.
Ultimately, a clear consensus emerged on which myths were most important to debunk, and I began contacting experts for input and research material. In addition to the work with external experts on all of these topics, AI Myths was informed and strengthened by Access Now’s work on AI. Below, I highlight some of the ways the project intersected with that work.
Will AI ethics guidelines save us?
In a survey I conducted on AI myths and misconceptions, one of the issues that got the most votes as a myth to address concerned “AI ethics washing.” The phrase has been used to critique companies that are viewed as leveraging vague ethical guidelines as a way to dodge regulation. On the AI Myths site, the essay “Myth: Ethics guidelines will save us” explores this issue in detail.
Thomas Metzinger, a philosophy professor and member of the European Union’s High-Level Expert Group on AI (HLEG AI), made perhaps the most famous recent accusation of AI ethics washing: he charged that members of the group had weakened the “Ethics Guidelines for Trustworthy AI,” likely due to industry dominance within the group.
Access Now’s Europe Policy Manager, Fanny Hidvégi, also a member of the HLEG AI, has been critical of the role of ethics in AI governance throughout Access Now’s engagement with the group. As Access Now noted when the HLEG’s ethics guidelines were first published, ethical principles are vague, and because adherence is voluntary, there is no strong incentive or obligation for companies and other actors to follow them.
Instead of promoting the development of ethics guidelines, Access Now has consistently advocated for applying human rights safeguards to the development and deployment of AI technologies. In addition to publishing a report on human rights and AI in 2018, Access Now joined Amnesty International in leading the drafting of the Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems, a significant step toward promoting the international human rights framework as an alternative to weak and vague AI ethics principles.
As Hidvégi explained, “The design and deployment of AI must be individual-centric and respect human rights. Trustworthy AI can be a step in the right direction but only if the guidelines are matched by the development and enforcement of adequate safeguards.”
Will AI regulation kill innovation?
In addition to pushing for human rights safeguards in the development and deployment of AI systems, Access Now has consistently noted that safeguards and risk mitigation are sometimes not sufficient to protect people from the harms AI systems cause; in those cases, regulation, up to and including bans on certain uses, may be warranted. This stands in contrast to the position of companies and other actors whose representatives repeatedly warn that regulating AI will kill innovation. “Myth: We can’t regulate AI” addresses this issue.
When the European Commission released its “White Paper on Artificial Intelligence – a European Approach to Excellence and Trust,” intended to provide a blueprint for new AI regulation, Access Now took part in the consultation process and made six recommendations, one of which is to ban applications of AI that are incompatible with fundamental rights. Rather than simply addressing harms when and where they arise, Access Now stressed that the European Union “must make it an explicit policy objective to stop or ban applications of automated decision-making or AI systems in areas where mitigating any potential risk or violation is not enough and no remedy or other safeguarding mechanism could fix the problem.”
As a concrete example of what this approach means, Access Now, along with the other members of the European Digital Rights (EDRi) network, has established a clear red line: governments must ban the capture and processing of biometric data in publicly accessible spaces. As outlined in the EDRi paper, Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States, such uses of biometric data significantly contribute to unlawful mass surveillance. Indeed, we would argue this is an example of “innovation” that regulation must stamp out to protect our rights and freedoms.
The problems AI can’t solve
As a global organization, Access Now is closely following the misuse of AI technologies in countries around the world, including Brazil. In June 2020, the organization submitted an expert opinion in a public civil action that the Brazilian Institute of Consumer Protection (IDEC) filed against the São Paulo Metro operator ViaQuatro, concerning the installation and use of an “AI crowd analytics” system from AdMobilize that claims to predict the emotion, age, and gender of metro passengers.
As outlined in the expert opinion, the idea that we can use AI systems to “detect emotion” or “detect gender” is false. There is no scientific basis for inferring a person’s emotions from their facial expressions. Likewise, it is impossible to “detect” gender by analyzing video footage with AI, because gender is not determined by physical characteristics such as facial features. Systems like the one in this case simply assign gender according to a male-female binary, an approach that discriminates against trans and non-binary people.
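To make the structural problem concrete, here is a minimal sketch in Python (the labels, function, and scoring are hypothetical illustrations, not AdMobilize’s actual system). It shows why such a classifier assigns rather than detects: its output space is fixed before it ever sees a face, so every input is forced into one of two labels, always with an apparently precise confidence score.

```python
import random

# The output space is hard-coded before the system ever sees a face.
LABELS = ["male", "female"]

def classify_face(face_pixels):
    """Stand-in for a trained 'gender detection' model.

    A real model would compute the score from the pixels; a random score
    is used here to highlight the structural point: whatever the input,
    the architecture guarantees the answer is one of the two LABELS.
    """
    score = random.random()
    label = LABELS[0] if score >= 0.5 else LABELS[1]
    confidence = max(score, 1.0 - score)
    return label, confidence

# Every possible input, including footage of a non-binary person,
# is forced into the binary.
print(classify_face([0.2, 0.7, 0.1]))
```

Swapping the random score for a sophisticated neural network would change how the pattern-matching is done, but not the fact that the label set itself encodes the discriminatory binary.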
The case serves to highlight problematic applications of AI technologies, where developers use machine learning to address problems that are impossible to solve using these approaches. There are in fact limits to what AI can do, a theme explored in “Myth: AI can solve any problem.”
Promoting AI uptake — but for what purpose?
Another issue that blocks productive work on human rights and AI is the vagueness of the term itself. In “Myth: The term AI has a clear meaning,” I discuss how the term “artificial intelligence” breeds confusion, as it can refer to technologies and use cases as diverse as natural language text generation (think of the “autocomplete” suggestions in your email) and facial recognition systems that enable mass surveillance.
While both leverage machine learning algorithms, their impact on our fundamental rights is vastly different. This leads to situations in which proponents of adopting AI without protections for human rights cite relatively banal and harmless technological advances, such as the development of customer service chatbots, to defend unregulated use of facial recognition systems.
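To illustrate, consider a minimal sketch in Python (the vectors and functions are invented for illustration, not any vendor’s API). Both an email autocomplete feature and a face-matching system can be built on the same core operation, comparing an input vector against stored reference vectors; what differs is what the vectors represent and what happens with the result.

```python
import math

def cosine_similarity(a, b):
    """Compare two vectors: a workhorse operation in many ML systems."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, candidates):
    """Return the label of the stored vector most similar to the query."""
    return max(candidates, key=lambda k: cosine_similarity(query, candidates[k]))

# Use case 1: suggest a sign-off while someone types an email (banal).
word_vectors = {"regards": [0.9, 0.1], "sincerely": [0.8, 0.3]}
print(best_match([0.85, 0.2], word_vectors))

# Use case 2: match a face from CCTV footage against a watchlist
# (a building block of biometric mass surveillance).
face_templates = {"person_A": [0.1, 0.9], "person_B": [0.4, 0.6]}
print(best_match([0.15, 0.88], face_templates))
```

The code is the same in both cases; the impact on fundamental rights is not, which is exactly why “AI” is too coarse a category to regulate as a single monolith.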
Access Now has highlighted the danger of this ambiguity in response to the European Commission’s commitment to (vaguely) “promote AI uptake” in the European Union, as well as to various stakeholders’ claims that the E.U. (or any particular country) must refrain from protecting human rights to “stay competitive in the AI race.”
As noted in Access Now’s submission to the consultation on the White Paper on AI, the “uptake of any technology, particularly in the public sector, should not be a standalone goal and it is not of value in itself.” There is no “AI race” in the sense of a zero-sum game: a country can decide not to develop certain applications of AI, such as biometric surveillance, while remaining competitive in other domains, such as natural language processing.
AI, or more specifically machine-learning-based technology, has certain uses and advantages, but it also carries certain risks. We cannot expect it to solve all our problems, and anyone crafting public policy should be aware of how the use of these technologies can lead to new harms and/or exacerbate existing ones.
Conclusion: debunking myths is just the starting point
The AI Myths site is a resource to help those shaping public policy cut through the myths and misconceptions about AI that frustrate the evidence-based debate and informed decision-making our societies need. But debunking myths alone will not ensure that AI technologies do not undermine or violate our fundamental rights. Every week brings news of AI systems rolled out without consideration of the risks to human rights, and without even the most minimal safeguards to prevent or mitigate harm. Worse, in many cases, from spurious gender detection apps to highly dangerous systems that claim to “predict criminality,” those deploying these systems do not appear to have considered whether the problems their “solutions” purport to address are even solvable using machine learning.
Instead of embracing blind techno-solutionism, governments and companies alike should take the clear-eyed, necessary steps to ensure that AI systems are used only when they are genuinely appropriate as a solution, and that when they are used, they center and respect people and their rights. Key to this is including civil society in decision-making processes. AI Myths would not be what it is without the inspiration and research input from Access Now and other civil society organizations working to ensure that our rights are protected as AI-powered technology is developed. My hope is that those in a position to influence AI policy for the better can take full advantage of their collective knowledge and insight.