This month, Access Now published a report that maps and analyses strategies and proposals for the regulation of artificial intelligence (AI) in Europe. The report covers regional strategies from the European Union and the Council of Europe, as well as national plans from several member states, including France, Finland, Germany, and Italy. Access Now lays out a set of criteria for assessing AI strategies, to make sure that the development and deployment of AI are individual-centric and human rights-respecting. Read the full report here.
Joining the AI race
The race is on to develop artificial intelligence, and Europe has joined in. With one eye on competitors from Silicon Valley to China, both individual member states and the European Union have announced “AI strategies,” which funnel money into education, research, and development to kick-start European AI.
World leaders from Moscow to Washington to Beijing have been engaging in a frenetic AI race, and the fear of lagging behind is real. While everyone seems to agree that Europe must jump on the AI bandwagon, no one seems to know where it is headed. Are countries forgetting to ask themselves the crucial questions: What type of AI do we really want for our society? What impact will this technology have — or is it already having — on people’s rights and people’s lives? Are there areas of social life where a decision is too important or too sensitive to leave to a machine at all? After all, there may not be a single race for AI but multiple ones, heading in opposite directions.
Governments already have to address all of these questions for the concrete applications of AI that are in place today. They will need to decide where existing laws and enforcement bodies are equipped to address the risks, and where adjustments are required — whether regulators need more tools or regulation needs to be brought up to date. This does not necessarily mean stifling innovation. As with automotive safety in the 20th century, creative regulation could become a mark not of European bureaucracy but of European quality. The EU has a robust tradition of protecting human rights and of effective regulation — through, for example, the EU Charter and the European Convention on Human Rights, the world-leading data rights in the General Data Protection Regulation (GDPR), and product liability rules — and should see this as an asset.
What are the European strategies telling us?
In our report, we assessed the European strategies on AI against a set of principles and human rights that are most relevant to the development, deployment, and use of AI, so that we could benchmark and compare the documents. We developed a list of criteria based on the principles and rights explicitly mentioned in the strategies on the one hand, and the most widely acknowledged issues affected by AI on the other. The list comprises 10 principles and rights, from transparency and accountability to the rights to privacy, data protection, and free expression, as well as broader collective and economic rights.
Overall, the theme of “ethics in artificial intelligence” runs through most of the AI strategies. A number of entities note that ethics, done properly, can support the existing legal framework and provide answers to some of the challenges raised by the use of AI. More concerning are the strategies that pay lip service to “ethics” while mainly expressing a willingness to loosen the regulatory environment. Authorities should be vigilant that ethics does not become a smokescreen for an unregulated technical environment.
Underpinning this preference for ethics appears to be a sense among states and experts that it is too soon to codify AI-specific regulation. This is partly because some of the key challenges linked to the use of AI may turn out to have technical solutions that make regulatory change unnecessary.
Overall, most published strategies contain at least a nod to many of the crucial areas where AI will implicate human rights — in particular transparency, accountability, privacy, and the future of work. The most widely reported issues are use cases with discriminatory impact, in particular in the criminal justice system, on the one hand, and privacy and data protection implications on the other. These are also the fields where academics, human rights organisations, and regulators have already been looking for solutions. Existing human rights and data protection principles — if backed by enforcement resources and a will for accountability on the part of public bodies — are already effective tools for managing AI for the public good. The GDPR itself is so young that its effect on the shape of AI is as yet untested, but it could provide interesting case studies in the future.
Despite this, we found significant gaps in the consideration of human rights in a large number of AI strategies, and we propose a series of recommendations to address them. Our overarching recommendation is that Europe should aim for a consolidated approach to the regulation of AI, one that is sensitive to the various contexts in which AI is already being developed and used, and that ensures regulation is applied and enforced in a consistent, rights-respecting way across the Union.
The European way forward
While some countries, such as Russia and China, seem to have mostly military applications in mind, the EU has the potential to lead the development of human-centric AI by reaffirming its values and putting adequate safeguards for rights in place. In doing so, Europe has an opportunity to set a direction for AI innovation that can truly work toward AI for Humanity.
Tech development has long been an area where Europe has looked at the US with envy. The freewheeling regulatory environment in the US has often been portrayed as a major reason the tech companies now at the forefront of AI development became so wildly successful. However, the grim realities of internet shutdowns, walled gardens, censorship, zero-day exploits, hate speech, data protection and privacy violations, and disinformation increasingly threaten to overshadow past promises and jeopardise the internet’s transformative power to realise human rights.
Meanwhile, it has become common to hold up China, with its sinister system of “social credit,” as the exemplar of an AI-powered dystopia to avoid. It is easier, however, to insist that the pluralist democracies of the EU would never go down China’s path than it is to spot and avoid problematic local equivalents, especially when China’s investment in AI is highly praised regardless of its human rights violations. If an insurance company mines your social media posts to assess your lifestyle and risk level, and sets your fees accordingly, is that not a soft, privatised form of social credit scoring? The Chinese case also serves as a reminder that the world’s largest data companies — including those at the forefront of AI development — evolve over time, and not always for the better.
Europe’s challenge will be to develop artificial intelligence policy that promotes innovation but steers between the “Wild West” approach that characterised the early Silicon Valley era and the statist approach of China. This is not a simple gold rush, nor is it a doomsday scenario that requires iron-clad regulation across the board. Rather, every socially significant use of artificial intelligence should be assessed in context, critically judged for its effect on European rights and freedoms, and regulated accordingly.
To get this right, it will also be essential to expand the debate well beyond the specialist technological community. Citizens are waking up to the potential misuse of their data while also realising the opportunities that AI presents, and it will be crucial to ensure their participation in the debate.
While states and regulators may always be playing catch-up with technological change, that is no reason to cede the regulatory field. Rules anchored in human rights and in ethical and legal principles can guide a better, more tailored offer. European regulation can, for example, protect citizens from trading away their right to a private life in order to use essential internet services because they believe they have no other choice. Developing smart AI regulation that keeps the human factor at the centre of the frame could and should be Europe’s unique offer.
Read our report here.
This report would not have been possible without the support of the Vodafone Institute for Society and Communications.