Update, June 2019: We’re addressing these issues at RightsCon Tunis. Here’s where you can find sessions on countering online harassment, hate speech, and violent extremism.
…
Today, a group of governments led by New Zealand, and major tech companies including Facebook, Microsoft, Google, and Twitter, signed and published the “Christchurch Call,” a voluntary pledge with commitments aimed at eliminating terrorist and violent extremist content online.
The Christchurch Call follows the horrific shootings in Christchurch, New Zealand, on March 15. In response to this and other tragic attacks in which the perpetrators used online platforms or services to publish and propagate abhorrent violent material, governments around the world are scrambling to enact new, far-reaching laws or policies in an effort to clamp down on speech posted or shared on social media platforms. Addressing the root causes and means of perpetuating violence against communities is a legitimate and critically important public policy objective. However, a rushed response that targets online speech but ignores the systemic issues behind it would put fundamental rights at risk while failing to prevent attacks.
Below is our response to the Call, including what we consider valuable, what needs work, and our recommendations for continued engagement on these critical issues for our societies.
The process for the Christchurch Call: flawed from the beginning
For at least a decade, the consensus has been that to address the key challenges of our time and keep the internet free, open, and secure, we must use an open, inclusive, multi-stakeholder approach. Increasingly, however, initiatives for collective action that would benefit from multiple, diverse perspectives disregard this model.
The Christchurch Call is yet another unfortunate example. The governments of France and New Zealand, which led the drafting of the text and brought together the other actors, did not create the text through an inclusive public consultation. Instead, those leading the conversation developed the recommendations for key governments and tech companies behind closed doors.
In a welcome move to increase transparency, New Zealand Prime Minister Jacinda Ardern, reportedly over the objections of the French government, convened a meeting in Paris for civil society called Voices for Action, in which Access Now participated. This meeting took place only a day before the Christchurch Call was released. The objective was for civil society stakeholders to provide input into how the Call should be implemented, not to seek feedback on the substance. Those attending were given the text of the Call just two working days before the meeting took place.
Engaging stakeholders is important, but this meeting did not address the lack of consultation on the substance of the Call, and the short time frame and scant information to prepare limited participants’ capacity to provide valuable input. Additionally, it kept civil society separate from the other key stakeholders, namely companies and governments, drastically limiting the ability to engage meaningfully. This was all despite the fact that civil society groups are referenced several times in the document and are expected to help with implementation.
What the Christchurch Call gets right
Compared to other initiatives aimed at limiting or stopping the dissemination of “terrorist” or “extremist” content online, such as the E.U.’s proposed terrorist content regulation or Australia’s law imposing criminal liability on social media executives (both of which dangerously ignore the potential impact on human rights), the Christchurch Call is a more measured and considered set of commitments.
The Christchurch Call explicitly cites human rights as both a guiding objective of government action and a limiting factor. It also underlines transparency obligations for companies and governments. These are fundamental pillars of any public or private attempt to regulate the online space, so it is very welcome to see them explicitly recognized.
When it comes to implementing recommendations, the Call invites governments and companies to engage with and discuss further steps with other actors, notably with civil society. (As we explain above, however, this invitation can only be made effective through meaningful engagement.)
What the Christchurch Call could have done better
Establishing a process for identifying and removing objectionable speech that could facilitate violence, discrimination, or other harm is a complex task with numerous possible failure points. The definitions must be narrow and specific, the process must be transparent and fair, and those required to act must be clearly identified and given incentives not to over-remove legitimate discourse or commentary.
This applies to the Christchurch Call, which aims to address “terrorist and violent extremist content,” a concept that can vary between countries and in some cases can be used arbitrarily to harm human rights. For example, journalists in Jordan have been prosecuted under the country’s terror laws for reporting that is critical of the government.
The Call also urges action by “online service providers” without defining the term. Under this broad umbrella, numerous unrelated internet services, such as infrastructure providers like DNS operators or telecommunications carriers, could be affected by rules not intended for them. To avoid this, the Call should have asked governments to ensure the use of clear, unambiguous, limited, and specific definitions in any related policy or regulation.
The Call also relies heavily on upload filters as a technical tool to prevent the dissemination of violent content. While in some cases it may be clear that content is inappropriate, in most cases interpretation requires meaningful human involvement and evaluation of context. That makes the analysis and determination of violent content incompatible with automated filtering. Automated systems for content moderation must be carefully evaluated before rollout and deployed only in a limited set of circumstances, as their use has serious implications not only for freedom of expression but also for other fundamental rights, such as privacy.
Last but not least, the Call places the responsibility for identifying and removing violent extremist content primarily on internet companies. These companies do have a key role to play and must live up to their responsibilities to society as well as their duty to respect human rights. However, governments should never outsource the regulation of speech to private entities, as doing so undermines due process and government accountability. Users deserve a clear, predictable, and human rights-compatible framework for protecting their freedom of expression online. Governments, together with the companies operating within their jurisdictions, are responsible for providing the conditions for the enjoyment of human rights.
Facebook’s first reaction: a change to its Live policy
Ahead of the Christchurch Call, Facebook announced changes to its Facebook Live policy. The changes entail a “one strike” policy for users who violate the “most serious policies” in the company’s community standards and terms of service. Those users could be barred from using Live for a set period, such as 30 days. Facebook’s announcement does not specify which policies are the most serious and could trigger this response.
It is not clear why Facebook has chosen to make this its first public move to address terrorist content in the context of the Call. Transparency and clarity are essential requirements of any such measure: users cannot be expected to infer which types of behavior could lead to a ban without clear and easily accessible guidance.
The connection between Facebook’s announced measures and the tragic events in Christchurch is also not entirely clear. As Facebook itself explained at the time, only about 200 people were watching the live broadcast during the shooting; what made the video so difficult to contain were the 1.5 million edited copies that kept being uploaded well after the attack was over.
Moreover, the response doesn’t get to the heart of what could actually make a difference: changes to Facebook’s business model, which currently entails collecting massive amounts of data and prioritizing engagement over other values. Neither Facebook’s announcement nor most government-led content moderation initiatives address how platforms make content and advertising decisions for the purpose of monetization.
Access Now’s recommendations on content moderation
Earlier this week, we published a discussion paper on human rights principles for content moderation at scale. The paper, which includes an analysis of Facebook’s proposed independent oversight board, lays out principles and recommendations for engaging in content moderation practices in a manner that is compatible with international human rights standards.
Human rights are universal rules by which all governments and companies should abide. We hope our paper can assist the companies signing the Call, and we expect to keep engaging with them and with governments in an open process to determine how best to put human rights principles into practice.
Despite the flawed process, civil society speaks with a united voice
Most of the issues we raise in this post are also voiced in a joint civil society statement that we endorse, and the statement raises additional important points on the Call and its merits.
The statement, facilitated by the civil society organization InternetNZ, is an effort to bring civil society voices to the Call despite the flawed process for creating it. We thank InternetNZ for its work on this, and we commend the government of New Zealand for its willingness to listen, and to keep listening, to these important voices.
We look forward to meaningful collaboration and civil society participation in the future, within the scope of the Call or in other government and company-led initiatives.
Conclusion: useful engagement and effective solutions will require more work
Some of the recommendations in the Christchurch Call can be useful if we ensure that the important debates of our time, such as those on the systemic issues of hate, literacy, and access to information, take place in a constructive, respectful, and participatory manner. We will need careful, constructive dialogue moving forward to develop effective, evidence-based policies for preventing violence. To protect a free, open, and secure internet, we will also need all signatories of the Call to engage in meaningful participation, with a genuine commitment to discussing issues and concepts in depth and with sufficient time for all stakeholders to provide input.
Those organizing other fora, such as the meeting of the G7, should not repeat the mistakes of the Call; they should make sure to consult stakeholders, including civil society organizations, well in advance.