Across the globe we are seeing examples of how artificial intelligence can be implemented in ways that either benefit or harm societies. In our new report, Human Rights in the Age of Artificial Intelligence, we look at the implications of the growth in AI-powered technologies through a human rights lens. Alongside the report, we encourage you to review an accompanying case study that examines how AI is used to conduct surveillance.
Why human rights matter in the AI debate
Imagine you are a farmer struggling to maintain your small family farm. You have lost crops to drought and pests in recent years, and you’re thinking of selling the property because you are sinking deeper into debt. Luckily for you, in the past few years AI has paired up with increasingly affordable Internet of Things devices to enable precision farming. You install sensors in your fields and hook them up to an AI system that pulls together real-time data from the sensors and combines it with satellite imagery and weather data. The system helps you manage scarce water resources by identifying optimal times to irrigate, and helps you catch pest infestations and diseases before they spread. Your farm is now more productive than ever, and you are no longer at risk of losing it.
AI development has taken off in recent years, and although it can be used in ways that benefit society — advancing the diagnosis and treatment of disease, revolutionizing transportation and urban living, and mitigating the effects of climate change — AI can also be used in ways that cause significant harm. The same data processing and analysis capabilities that are used, for example, to measure and respond to demands on public infrastructure can also enable systems for mass surveillance. AI can be used to identify and discriminate against the most vulnerable in society, and it may revolutionize the economy so quickly that no job retraining program can possibly keep up. Additionally, the complexity of AI systems means that their outputs are often difficult, if not impossible, to fully explain. We are deploying these opaque systems rapidly and often carelessly, yet the use of AI for data analytics and algorithmic decision-making can have an immediate, negative impact on people’s lives, with the potential to hurt our rights on a scale never seen before.
So far, those seeking to prevent harm and mitigate risks have focused primarily on ethics for AI development and deployment, while paying scant attention to human rights. The human rights community, meanwhile, has only recently begun to consider the full range of risks posed by AI, and there is considerable uncertainty about how to conceptualize these risks. Fortunately, momentum is now building behind human rights as a foundation for the AI debate. Human rights can complement existing ethics initiatives: they are universal, binding, and more clearly defined than ethical principles. And because they are codified in international law and institutions, human rights can provide well-developed frameworks for accountability and remedy. By invoking human rights, we can address some of the most egregious societal harms caused by AI, and prevent such harms from occurring in the future.
How AI poses risks to human rights
Imagine you are in the immigration line about to enter a foreign country. You are traveling there legally for a conference, and you happen to resemble members of an ethnic separatist group the government considers dangerous. You reach the immigration booth, and an automated camera you don’t see takes a photo of you. That photo is analyzed using facial recognition software that categorizes you into a personality type based solely on your face. The results pop up on the immigration officer’s screen, indicating a high likelihood that you belong to a terrorist group. You are taken to another room, where you are interrogated, searched, and then left alone for hours. You have been detained, and you have no idea why.
As a first step in our effort to bring human rights to the AI debate, Access Now and our partners developed and published the Toronto Declaration on protecting the rights to equality and non-discrimination in machine learning. However, the right to non-discrimination is not the only human right implicated by AI. Human rights are interdependent and interrelated, and AI affects nearly every internationally recognized human right, from the rights to privacy and freedom of expression, to the rights to health and education. In our report, we examine how current and near-future uses of AI could implicate and interfere with these rights.
While many of the human rights risks posed by AI are not new to the digital rights space, the scale at which AI can identify, classify, and discriminate among people magnifies the potential for human rights abuses in both reach and scope. Our paper explores how AI-related human rights harms disproportionately impact marginalized populations. This is primarily because the training data fed to AI systems reflects the historical marginalization of these groups, and that bias is then reproduced in outputs that can further entrench patterns of marginalization.
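To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It is our own illustration, not drawn from the report, and the scenario, group labels, and numbers are invented: a toy decision system "trained" on simulated historical loan decisions that were biased against one group simply memorizes per-group approval rates and reproduces the same disparity for new, equally qualified applicants.

```python
# Illustrative sketch only: invented data, invented numbers.
import random

random.seed(0)

def make_history(n=1000):
    """Simulated past loan decisions: group B applicants were approved
    far less often than group A applicants at the same qualification
    level -- the historical bias the text describes."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.7  # qualification rate is equal
        if qualified:
            # Biased human decision-making: qualified B applicants were
            # approved only half as often as qualified A applicants.
            approved = random.random() < (0.9 if group == "A" else 0.45)
        else:
            approved = False
        records.append((group, qualified, approved))
    return records

def train(history):
    """'Training' here just memorizes the approval rate observed for
    each (group, qualified) combination -- the pattern in the data."""
    counts, approvals = {}, {}
    for group, qualified, approved in history:
        key = (group, qualified)
        counts[key] = counts.get(key, 0) + 1
        approvals[key] = approvals.get(key, 0) + approved
    return {key: approvals[key] / counts[key] for key in counts}

model = train(make_history())

# Equally qualified applicants receive very different predicted
# approval rates, because the model faithfully reproduces the bias
# embedded in its training data.
print("P(approve | qualified, group A):", round(model[("A", True)], 2))
print("P(approve | qualified, group B):", round(model[("B", True)], 2))
```

The point is not the specific numbers, which are invented, but the mechanism: nothing in the training step distinguishes legitimate patterns from inherited discrimination, so the bias carries through by default.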
How to prevent and mitigate AI-related human rights risks
Imagine you are living in a big city when a massive earthquake strikes. The building you are in partially collapses, injuring you severely and trapping you inside. The whole city suffers significant damage, and emergency response services cannot be everywhere at once. Luckily, they use an AI-powered system designed to help emergency response centers respond to earthquakes more effectively. Within a few minutes the system has predicted the extent of the damage across your city, down to the block. It shows emergency responders which areas have been hit hardest, allowing them to prioritize accordingly. They deploy quickly to your neighborhood, and you are safely rescued from the rubble.
Government regulation and corporate practices that take a human rights-based approach can help prevent and mitigate harms from the use of AI. For example, comprehensive data protection laws can address many of the key human rights risks posed by AI. We also recommend other approaches for mitigating these risks, for both governments and the private sector. AI is a large and diverse field, and not all uses of AI carry equal risk. The actions required to prevent or respond to AI-enabled human rights violations will depend on the specific facts and context. We call for ongoing research on the uses of AI and their potential impact on human rights, supported by government actors, the private sector, and other stakeholders. Emphasis should be placed on identifying potential threats and building response mechanisms to mitigate their negative effects.
Civil society also has a role to play
Imagine you are an activist in a civil war zone dedicated to documenting war crimes. You risk your life daily, and you have often lost cameras and smartphones containing footage of evidence. You rely on regularly uploading your videos to a streaming service to keep them safe and publicly available. Sometimes, however, the streaming service takes your videos down because the AI system it uses for content moderation flags them as violent content that violates its terms of service. Because of this, you have lost hundreds of videos documenting atrocities against innocent civilians, videos that are often the only evidence of these crimes.
Civil society organizations must start paying attention to the role of AI in the areas in which they work. AI often lurks beneath the surface of existing processes, so journalists and human rights organizations play an important role in exposing irresponsible or harmful AI systems. We should also be looking to the future.
While the debate over AI and ethics may serve to stall some of the most reckless implementations of AI in some countries, in others the conversation is only beginning. Civil society can help governments and companies avoid repeating mistakes that have already been made and documented. Unfortunately, authoritarian-leaning governments around the world will likely have few qualms about rushing to use untested, problematic AI-powered systems if those systems are perceived as helping the regime retain power and control. China is a primary player in AI development and an innovator in the domestic use of AI for surveillance and repression. We can expect China to export these capabilities, just as it has exported other pieces of internet and telecommunications surveillance infrastructure.
It is up to governments and companies that value and respect human rights to take the lead in this area. That includes working openly and transparently with civil society partners, such as human rights organizations, to ensure that the development of AI is user-centered and human rights-based. If we remain vigilant and act decisively to shed light on uses of AI that violate human rights and hurt our societies, perhaps together we can prevent the worst of those uses from spreading.
You can learn more about our investigation of uses of AI and recommendations to protect human rights in the full report.