This post on systemic racism is part of our three-part blog series reflecting on 2020 and the state of digital rights at the United Nations. Check out our previous posts on the world’s failure to meet Sustainable Development Goal 9.C on internet access and the need for transparency on the U.N. Tech Envoy selection and appointment process.
The past year marked a global reckoning with racism, and the conversation about how to address systemic racism is among the most important to emerge from 2020. Research continues to expose how some uses of new technology — including artificial intelligence — are building an invisible infrastructure for oppression, which can serve to entrench white supremacy and racial discrimination in our societies.
That is why today — on Human Rights Day — we are highlighting the mandate of E. Tendayi Achiume, the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance. Achiume is at the forefront of efforts at the U.N. to eliminate racism globally, and she has identified technology as one of the most urgent areas for a “long-overdue interrogation of systemic racism.”
Earlier this year, the U.N. Human Rights Council responded to the police killing of George Floyd in the U.S. — and the global wave of #BlackLivesMatter protests that followed — by holding an urgent debate on the “current racially inspired human rights violations, systemic racism, police brutality, and violence against peaceful protest.” That discussion was critically important, but more important is ensuring the international community stays focused, listens to impacted communities, and pursues the kind of meaningful, concrete action that will stop targeting, discrimination, and violence based on race. The interrogation of systemic racism must include critical examination of processes at global institutions, including at the U.N. itself.
Below, we take a closer look at Special Rapporteur Achiume’s work and our efforts to support her mandate, highlighting pressing technology-related issues for racial justice and the future of human rights, including preventing AI-powered discrimination, targeted internet shutdowns and surveillance, abuse of digital technology in border control, and attacks on the rights to free assembly and association.
As the Access Now staff reflects on the past year, we are also reflecting on our own privilege and position in the conversation on racial justice. We continue to learn what it takes to fight racism with meaningful civil society action, as well as to “unlearn” what is not helpful or even perpetuates the damaging status quo. This is an ongoing process that we hope will enable us to become more effective in support of real and lasting change.
Fighting AI-powered discrimination
In June, Special Rapporteur Achiume submitted her report on racial discrimination and emerging digital technologies at the 44th session of the U.N. Human Rights Council. We contributed to the report during the consultation process in December 2019, and joined 84 civil society organizations and individuals in a statement to underscore the importance of its findings. The report finds that algorithmic technologies based on Big Data are “reproducing the discriminatory systems that build and govern them” and that “emerging digital technologies exacerbate and compound existing inequities, many of which exist along racial, ethnic, and national origin grounds.”
Our submission stresses the importance of using an international human rights law approach to regulating new technologies, citing the Toronto Declaration, the statement we led with Amnesty International on protecting the rights to equality and non-discrimination in machine learning that was launched during RightsCon Toronto in 2018.
Achiume makes it clear that emerging technologies do not exist in a cultural void, and instead reflect the racial and ethnic discrimination that exists in our societies. Addressing the myth that technology is neutral and objective, she calls for deeper analysis of the impact the use of emerging technologies has on equality and discrimination.
Following the report, Achiume joined a call for an immediate moratorium on the sale, transfer, and use of surveillance technology, which not only seeks implementation of robust human rights safeguards for these technologies, but also notes that “it will be necessary to impose outright bans on technology that cannot meet the standards enshrined in international human rights legal frameworks prohibiting racial discrimination.” We strongly support this call, which echoes similar calls by the U.N. High Commissioner for Human Rights and the former U.N. Special Rapporteur on freedom of opinion and expression, David Kaye. We are not alone. Our latest report on AI, published in partnership with Vodafone, reflects growing support for such bans among diverse stakeholders in the European Union.
New technologies — particularly those typically labeled “artificial intelligence” — are anything but neutral. Gender Shades, an excellent resource by Joy Buolamwini and Timnit Gebru (the AI ethics researcher making headlines this week after Google fired her), explains how racial and gender bias can distort the operation of facial recognition systems. Similar research has exposed the discrimination and harm natural language generation tools, systems for content moderation, and other applications of machine learning can cause. While these findings have spurred calls for ensuring that AI-powered systems operate in ways that are fair, it is increasingly clear that there are limits to efforts to mitigate bias and that some applications of technology are so dangerous for human rights they demand outright prohibition.
Special Rapporteur Achiume’s report builds on a foundation of U.N. findings that Silicon Valley and governments around the world have largely overlooked. As we near the 20th anniversary of the 2001 Durban Declaration and Programme of Action of the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance, it is worth reflecting on some of its findings. Notably, recognizing the internet’s ability to spread tolerance and respect for human dignity, the Programme of Action called on states to discourage racism online and companies to hire staff that reflect the “diversity of societies.”
In 2020, only 3.9 percent of Facebook employees in the U.S. identified as Black. Research by Stephen Cave and Kanta Dihal on the “Whiteness of AI” points to the many ways in which the field of artificial intelligence fails to represent — or serve — Black people.
Stopping targeted internet shutdowns and surveillance
Of course, it is not just through use of artificial intelligence that technology can deepen race-based discrimination. In her reports to the U.N. Human Rights Council, Special Rapporteur Achiume gives several examples to illustrate how technology can be misused to harm people in racial and ethnic minority groups, especially when it is deliberately weaponized. One example is the continued government use of targeted internet shutdowns to silence specific communities, as documented in the 2019 #KeepItOn report. Among those targeted in this way: Rohingya Muslims, both in Myanmar and in Bangladesh, where they have sought refuge from genocide.
Another example is government collection and processing of individuals’ biometric data. In India and Kenya, this information is collected for digital identity programs that grant or deny people access to various public services, such as food rations, unemployment benefits, or the registration of births. Use of these systems often marginalizes or excludes people in racial and ethnic minority groups, who face serious logistical, infrastructural, or physical hurdles in securing access.
Protecting people across borders
Achiume further explores the ways governments use technology to marginalize and exclude targeted communities in a recent report to the U.N. General Assembly on race, borders, and digital technologies. The report examines government deployment of digital technologies, including drones, in the context of border enforcement and administration. It highlights tactics such as systematic invasion of privacy, discrimination, and exclusion based on citizenship and legal status, justified worldwide as national security and counterterrorism measures.
We contributed to this new report in May 2020. Our submission focused on the experimental use of new technologies, surveillance methods, and data gathering tools at the border in the European Union and the United States, including initiatives like the E.U. “Smart Borders” Package and the U.S. government’s collection of social media information and smartphone data.
Not only does Achiume highlight concerns about government programs for border control, she also raises concerns about humanitarian agencies’ use of digital technologies to provide aid, an issue we have been following closely through our #WhyID campaign. The report highlights, for example, that in refugee camps in Afghanistan, the UNHCR, the U.N. refugee agency, “mandated iris registration for returning Afghan refugees as a prerequisite for receiving assistance.” We further note that the World Food Programme, in partnership with the UNHCR in 2016, introduced iris scan payment technology in the Zaatari and Azraq refugee camps in Jordan.
UNHCR has said its biometric prerequisite in Afghanistan is a justifiable means to prevent fraud. However, Special Rapporteur Achiume rightly observes that “the impact of processing such sensitive data can be grave when systems are flawed and abused.” The humanitarian sector, including leading voices within the U.N. system, must reevaluate its use of biometric data for identifying and surveilling those whose human rights are already severely threatened. A failure to do so will result in the continued and worsening exclusion of refugees and asylum seekers.
Defending Black-led movements and the right to protest
In the U.S., police have cracked down violently on #BlackLivesMatter protesters exercising their rights to free assembly and association, leading to a direct appeal to the U.N. for intervention. We believe the international community can and must do more to stop police violence, and earlier this month, we co-hosted a side event, directed at the U.N. General Assembly Third Committee, on Structural Racism, Police Violence & the Right to Protest. We were honored to have Special Rapporteur Achiume join a panel of activists, civil rights lawyers, and fellow U.N. leaders to discuss the global impact of systemic racism, with a special focus on law enforcement brutality, the lack of accountability for police violence, and the importance of upholding freedom of expression and assembly — whether online or off.
The road ahead
In a year of overwhelming challenges, including a global pandemic that in the U.S. disproportionately impacts Black and Latinx communities, Special Rapporteur Achiume has seized the opportunity to reinvigorate the interrogation of systemic racism that we desperately need. She has reminded all stakeholders — from governments to NGOs to the private sector — of the fundamental importance of using a structural, intersectional, human rights-based approach to combat racism and challenge discrimination in the design and use of emerging digital technologies. Her work is already informing U.N. human rights resolutions, such as the Privacy in the Digital Age U.N. General Assembly resolution led by Germany and Brazil, which was passed by the Third Committee and is headed toward adoption.
As we look ahead to 2021, we see the Special Rapporteur’s advocacy efforts as a roadmap to strengthening the mutually reinforcing rights to privacy, freedom of expression, equality, and non-discrimination, and to holding state actors accountable for addressing systemic racism in the tech sector and beyond. We look forward to continuing our work to support her critically important mandate.