
AI and disinformation: our contribution at UNESCO’s Mobile Learning Week

On Tuesday, Access Now participated in a workshop on artificial intelligence and disinformation, hosted by the United Nations Educational, Scientific and Cultural Organization (UNESCO). The workshop was part of UNESCO’s annual Mobile Learning Week.

The discussion focused on two main aspects of the relationship between artificial intelligence (AI) and disinformation: the use of AI by governments and private companies to stop the spread of disinformation, and the use of AI to create disinformation in the first place. Workshop participants were asked to identify two main problems and two solutions or recommendations related to this dynamic. Our contribution built on Informing the misinformation debate, our joint report with European Digital Rights (EDRi) and Liberties, published in October 2018.

We still don’t have solid research

One of the biggest issues is a general failure to understand the real effect of disinformation. Any initiative aimed at combating disinformation should be an evidence-based policy solution, grounded in clear empirical data showing actual harms at a scale that merits intervention. Despite major scandals, such as the one sparked by Cambridge Analytica, the firm that mined the data of over 50 million people and leveraged it for targeted ads to influence the 2016 U.S. presidential election, there is still significant uncertainty and a lack of evidence about the impact of technology-driven disinformation campaigns. The alleged problems and their potential impact on human rights are still not well understood. More quantitative, empirical research is needed to craft solutions that respect human rights.

Flawed business models

The second major issue is the business model behind the accounts and platforms used to spread disinformation. These business models are based on promoting sensationalist news as a means of competing in the market for people's attention. Platforms employ micro-targeted surveillance advertising, using people's data to decide what content appears in their news feeds and delivering whatever is most likely to appeal to them and prompt them to click on or otherwise engage with it. The same algorithms that exploit this behavioral data, generate analytics, and power advertising exchanges are also leveraged for cluster detection and for tracking social media sentiment. None of this would be possible if there were a sufficient legal framework for protecting user data, and sufficient enforcement of that framework.
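
To make this dynamic concrete, below is a minimal sketch in Python of a hypothetical engagement-optimized feed ranker. It does not represent any real platform's system; the names, weights, and scoring function are invented purely to illustrate why an objective that maximizes predicted engagement tends to reward sensational content.

```python
# Toy illustration only: a hypothetical engagement-optimized feed ranker.
# This is NOT any real platform's algorithm; the names and weights are
# invented to show why content a user is most likely to react to rises
# to the top of a feed.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    sensationalism: float  # 0.0 (neutral) .. 1.0 (maximally sensational)

# A behavioral profile built from collected interaction data:
# hypothetical per-topic click-through rates observed for one user.
user_topic_affinity = {
    "politics": 0.9,
    "sports": 0.2,
}

def predicted_engagement(post: Post) -> float:
    """Score how likely the user is to click or share, per the profile."""
    affinity = user_topic_affinity.get(post.topic, 0.1)
    # Sensational framing tends to lift short-term engagement, so an
    # engagement-maximizing objective mechanically rewards it.
    return affinity * (1.0 + post.sensationalism)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed simply maximizes predicted engagement; there is no term
    # for accuracy, so misleading-but-gripping content can win.
    return sorted(posts, key=predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("a", "sports", 0.1),
        Post("b", "politics", 0.2),
        Post("c", "politics", 0.9),  # sensational political post ranks first
    ])
    for p in feed:
        print(p.post_id, round(predicted_engagement(p), 2))
```

Because the objective contains no term for accuracy or truthfulness, misleading but gripping content can outrank sober reporting, which is exactly the incentive problem described above.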

Companies should move away from models that rely on sensationalism and shock, and civil society and governments need to work together to address the business models of online manipulation in a holistic, comprehensive way, including by crafting appropriate data protection, privacy, and competition laws. It is not enough to encourage platforms to adopt removal or verification mechanisms (such as flagging and "disputed" tags) if the platform's fundamental business model itself facilitates or propagates the problem.

Botched government initiatives to stop alleged “fake news”

Finally, government-backed legal initiatives ostensibly created to fight what is labeled "fake news" often amount to yet another legal barrier to the free flow and exchange of information and speech. Combating "fake news" has indeed become a standard tool for stifling free expression in many countries, including Egypt, which criminalizes vaguely defined "fake news" by law, and Bangladesh, which shut down the internet during its last elections in December 2018. We explicitly advise against using the term "fake news." Its muddy definition can too easily lead to false positives in content takedowns and, as a result, to broad, often baseless infringement of the fundamental right to free expression. We should address disinformation, misinformation, government propaganda, and related issues accurately and proportionately.

Data protection: a necessary tool

But it is not only the right to freedom of expression that is at stake. We cannot separate that right from the rights to privacy and data protection, which together can act as a shield to stop the flow of disinformation worldwide. If the illegal collection of and access to users' data were stopped, micro-targeted disinformation campaigns would lose much of their effectiveness and threat potential. As is already clear, weak data protection rules and enforcement not only undermine user privacy and choice, but also enable constant monitoring, profiling, and "nudging" toward political and economic decisions.

National legal frameworks should strengthen transparency requirements and limits on behavioral advertising for political purposes, as well as the capacity to impose sanctions for the use of illegally acquired data in electoral processes. For political ads, for example, there must be more scrutiny of where they are placed, who sponsors them, and how much is spent on them. Journalists can also investigate the ads and establish criteria for trustworthiness, as Reporters Without Borders and its partners, the European Broadcasting Union (EBU), Agence France-Presse (AFP), and the Global Editors Network (GEN), have done through the Journalism Trust Initiative project.

We hope that our contribution helped guide the path toward more responsible design, roll-out, and use of artificial intelligence technologies in the quest to address disinformation, and we look forward to contributing to UNESCO's work in this area moving forward.

You can find more details about Access Now’s position on artificial intelligence and human rights here and on mapping European regulatory proposals on AI here.