
Misguided “solution” to terrorist content will have bad consequences for our rights

This op-ed is co-authored by Eliška Pírková (Access Now) and Eva Simon (Liberties) and was originally published by Euractiv.

One year ago, on 15 March 2019, a terrorist entered one of the mosques in Christchurch, murdering more than fifty people. Beyond the atrocity of such a hate crime, it shook the whole world for at least two reasons. First, it happened in New Zealand, which had been considered a peaceful corner of the world. Second, the terrorist broadcast the massacre on Facebook Live. Facebook reacted immediately by stopping the broadcast. However, within the first 24 hours there were 1.5 million attempts to upload some version of the original video, of which 1.2 million were blocked before going online.

In its aftermath, governments around the world, led by New Zealand Prime Minister Jacinda Ardern, called for action to hold social media platforms more accountable for combating illegal online content. However, blocking and removing illegal content as a silver-bullet solution has been on the agenda of the EU and its Member States for years. The 2019 reform of the Copyright Directive, the 2018 Audiovisual Media Services Directive, and the 2016 Code of Conduct on countering illegal hate speech online are all part of the EU’s effort to address the dissemination of illegal online content.

The EU and national governments tend to respond to terrorist attacks by introducing new measures that shift responsibility onto online platforms, ordering them not only to remove illegal content but also to proactively detect it. This means that private companies have to evaluate uploaded material and draw the line between legal and illegal speech. The result is a situation where our free speech depends on the non-transparent and non-accountable – and, therefore, non-contestable – decisions of private companies.

In the EU, one of the measures to address specific terrorism-related content is the proposed Regulation on preventing the dissemination of terrorist content online, put forward by the European Commission in September 2018. The purported objective of the proposed Regulation is “to provide clarity as to the responsibility of hosting service providers in taking all appropriate, reasonable and proportionate actions necessary to ensure the safety of their services and to swiftly and effectively detect and remove terrorist content online.” At the same time, online service providers have to respect freedom of expression and information – a requirement that is challenging even for courts and law enforcement. The draft Regulation is currently going through the legislative process in the European Parliament, where the co-legislators negotiating the final text are about to reach a compromise on its wording. The negotiations will end soon, and as things stand, the Commission and the Council are ready to sacrifice fundamental rights and censor free speech. We believe that social problems cannot be solved by platforms’ content moderation.

Based on recent regulatory measures and the demands of policymakers in the EU, online platforms find themselves in a situation where they have to proactively monitor large amounts of online content and remove alleged terrorist material within strict time frames. Furthermore, removal orders are meant to have a cross-border effect: in practice, any Member State may order online platforms to restrict content, and that restriction will apply across the whole Union. Atrocities such as Christchurch taught platforms that automated measures are the only way to cope with such a huge number of demands. Yet because they cannot assess the historical, political, or linguistic context of an expression, automated tools make profound mistakes. Despite these documented errors, the public does not know how these systems are audited or how many and what kinds of erroneous results they generate. The definition of online terrorist content is shaped by the historical and political circumstances of each country, and there is no EU consensus on what constitutes a ‘terrorist group.’ Automated tools are unable to grasp religious and cultural differences across regions. Not to mention that journalists and human rights groups often need to post photos or videos of terrorism-related content to raise awareness and document atrocities.

Even before the EU introduced the draft legislation, online platforms established the Global Internet Forum to Counter Terrorism (GIFCT) in 2017 to moderate extremist content. By now, Facebook, Twitter, Microsoft, YouTube, Dropbox, Amazon, LinkedIn, and WhatsApp are all part of GIFCT. They created and continuously update a hash database (https://accessnow.demo.cshp.co/open-letter-to-eu-parliament-on-the-terrorism-database/) of audiovisual terrorist content to help each other detect and block this content. However, we know very little about how these technologies work on a daily basis, how terrorist content is defined, and what criteria are used to identify content as terrorist content. Rights organizations have been vocal about the lack of transparency and accountability of these technologies. Thus, it is challenging to assess the effectiveness and accuracy of the automation used against terrorist propaganda.
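For readers unfamiliar with the mechanism, the sketch below shows the basic idea of checking uploads against a shared database of fingerprints. It is a deliberately simplified assumption: the names, values, and the use of exact SHA-256 hashes are ours for illustration only, while GIFCT’s database reportedly relies on perceptual fingerprints designed to survive re-encoding and cropping.

```python
# Minimal, hypothetical sketch of matching uploads against a shared hash
# database. Illustration only: real systems use perceptual fingerprints
# (robust to re-encoding and cropping), not the exact SHA-256 matching here.
import hashlib

# Hypothetical shared database of fingerprints contributed by member platforms.
SHARED_HASH_DB = {
    # Digest of a previously flagged file (here, the bytes b"flagged clip").
    hashlib.sha256(b"flagged clip").hexdigest(),
}

def fingerprint(upload: bytes) -> str:
    """Compute the fingerprint of an uploaded file's raw bytes."""
    return hashlib.sha256(upload).hexdigest()

def is_flagged(upload: bytes) -> bool:
    """Check an upload against the shared database before it goes live."""
    return fingerprint(upload) in SHARED_HASH_DB

# An exact copy of a flagged file is caught...
print(is_flagged(b"flagged clip"))              # True
# ...but any altered copy slips through exact matching entirely.
print(is_flagged(b"flagged clip, re-encoded"))  # False
```

The trade-off the sketch hints at is part of the accuracy problem: exact matching misses even slightly edited copies, while fuzzier perceptual matching inevitably produces false positives – and without transparency about thresholds and audits, outsiders cannot tell how often either failure occurs.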

Both the heavy reliance on automated measures and the cross-border nature of removal orders pose a direct challenge to pluralism and the protection of the diversity of information, which are cornerstones of democratic societies. If implemented, these measures will ultimately result in pan-European content restrictions on a huge scale, performed mainly by automated tools. Governments must protect the fundamental rights of their citizens. Without appropriate safeguards, such as robust transparency and accountability mechanisms, which are currently lacking in the legislative proposals, freedom of expression and the rule of law will be endangered. Acts of terrorism are serious crimes that demand appropriate responses from political leaders. It remains puzzling why the simple deletion of alleged terrorist material should provide a satisfactory solution to a deeply rooted societal issue that is extremely complex and strongly dependent on the national context.

The Christchurch attack was an unprecedented act of terror, broadcast live on the internet. Online platforms responded to states’ demands by relying heavily on automated tools to proactively detect terrorist content. This has resulted in a rapid increase in terrorism-related takedowns, selling policymakers the picture of an effective and accurate detection system. However, how automated tools are deployed and how often they generate false results remain largely secret, even though their high error rates have been widely documented by civil society organizations and academia. The coronavirus pandemic, and the flawed solutions companies have proposed to tackle misinformation and content moderation during it, show why we must push for systemic improvements to and safeguards for these systems. Machine learning systems are profoundly context-blind. They make gross mistakes that ultimately curtail users’ freedom of expression and opinion, with no adequate remedy to challenge an automated decision.

Follow our work on the protection of digital rights in the context of the COVID-19 pandemic.