This week, four major tech companies (Microsoft, YouTube, Facebook, and Twitter) announced a partnership aimed at reducing the amount of “terrorist” content on their platforms. The companies committed to sharing “hashes,” or digital signatures, that will be used to flag images and videos online and help each platform identify and remove that content.
Taking down such content is risky for online expression, and programs to counter violent extremism (CVE) must be implemented with great care and precision. We appreciate that companies are concerned about important societal and security issues, and we welcome consultation on them, but companies cannot and should not address these problems through private enforcement schemes that fail to meet human rights standards, including standards for transparency and access to remedy. Doing so would exacerbate the existing problems with CVE programs. Embarking on a collaborative program means that all four companies must commit to operating with greater legal clarity and improved transparency about how and why they remove content, including spelling out what happens when they overstep.
There are (at least) three difficult issues here. First, a hashing method may be a poor fit for the complexities involved in determining whether content is “extremist” or “terrorism”-related, because context matters. Companies have used a similar approach to identify and reduce exposure to child sexual exploitation content, which is illegal to possess or post under any circumstances, across nations. But when content is allegedly “terrorism”-related, and context is critical for determining its meaning, hash matching could easily lead to removing material that should stay up. For instance, a reporter, blogger, or citizen journalist might use an image for news coverage or commentary after it has been hashed and shared among the companies for potential removal; the match would be made regardless of that context. Under the proposal, when an image is flagged, it would then undergo human review against each company’s standards.
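To see why context disappears in this kind of system, consider a minimal sketch of hash-based flagging, written in Python purely for illustration. The companies have not disclosed their matching method, so the plain SHA-256 lookup, the database name, and the function below are all assumptions (real deployments more likely rely on perceptual hashes such as PhotoDNA). The point is that a match is made on the file alone, with no knowledge of who posted it or why.

```python
import hashlib

# Hypothetical shared database of hashes contributed by the participating companies.
# The consortium has not published its matching method; PhotoDNA-style perceptual
# hashes are more likely in practice than the plain SHA-256 used here.
SHARED_HASH_DB = {
    "placeholder-hash-value",  # stand-in for a real shared hash entry
}

def flag_for_review(image_bytes: bytes) -> bool:
    """Return True if the image matches a shared hash and should go to human review."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    # The lookup sees only the file itself: it cannot tell whether the uploader is a
    # propagandist, a journalist reporting on extremism, or a researcher archiving it.
    return digest in SHARED_HASH_DB
```

Any human review happens only after this context-blind match, which is why the standards applied at that review stage matter so much.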
Second, it’s not clear how companies would apply their standards in practice. As it stands, each company’s community standards are imprecise, with varying definitions of “extremist” or “terrorism”-related content that sit outside any clear legal mandate. Without more transparency about how, when, and why content is removed, it remains impossible to evaluate how those community standards measure up against human rights standards.
Finally, it’s disheartening to see these platforms eagerly collaborate on an initiative to restrict expression while insisting that they cannot collaborate in the same way to protect users’ rights. The third pillar of the United Nations “Ruggie” Framework on Business & Human Rights says that companies should jointly provide people with access to remedy for business-related harms. Yet the average user today has no meaningful understanding of how companies enforce their terms of service, how to appeal when an account is suspended or content is taken down, or how to prevent their data from being spread and sold across the internet. Companies claim that they cannot engage on remedy because they have billions of users, even as they clearly innovate to “scale up” their capacity to restrict content.
The big picture is disturbing. Under heavy pressure from governments to take action, companies are moving closer to creating what amounts to a private body of law that they alone control. It’s not clear how the collaborative, cross-company CVE program will be administered, or whether and how the companies plan to offer people and groups whose content is hashed and targeted for removal the appropriate, rights-respecting mechanisms for accountability, transparency, and redress.
Further, this sort of well-meaning joint effort may do little more than scrape the bottom of the content barrel. It creates a mechanism for joint action by some, though notably not all, of the world’s largest internet platforms to remove “terrorism”-related content, and the program could even prove counter-productive, inflaming rather than discouraging extremism. Yet this mechanism, once created, could become a black box that is difficult to see into, understand, or push back against. Blacklists grow and missions creep. With the often-vague concept of “terrorist” content at its heart and free expression on the table, this program could bleed in many directions.
Access Now recently published a policy guide on how to evaluate proposals like this to counter “violent extremism” online. The guide maps out a set of high-level principles and provides specific recommendations based on those principles. We are concerned that the CVE program announced by Microsoft, YouTube, Facebook, and Twitter could undermine users’ rights and weaken the human rights law norms that apply in this area. That’s why we created the guide: to help companies and other stakeholders fortify those rights-respecting norms.
As we have noted before, if we do not protect the freedom and openness of our internet, we risk destroying users’ trust globally. That would play right into the hands of those who wish to inflame conflict and feed extremism. We ask that companies work closely with civil society stakeholders around the world to preserve trust and defend rights.