In situations of armed conflict and other crises, people use social media and messaging platforms to document human rights abuses or war crimes, access information, mobilize for action, and crowdsource humanitarian assistance. But governments and other actors leverage these same platforms to spread disinformation and hate speech, incite violence, and attack or surveil activists, journalists, and dissidents. In light of the increasingly important role social media companies play during crises, Access Now and partner organizations have co-authored a Declaration of principles for content and platform governance in times of crisis.
This Declaration, jointly developed by Access Now, ARTICLE 19, Mnemonic, the Center for Democracy and Technology, JustPeace Labs, Digital Security Lab Ukraine, Centre for Democracy and Rule of Law (CEDEM), and the Myanmar Internet Project, sets out guidelines to help platforms protect human rights before, during, and after a crisis.
Social media companies have a responsibility to prevent and mitigate human rights harms stemming from the use of their systems. Historically, however, they have responded inadequately and inconsistently, as demonstrated by their failed responses to conflict situations in Ethiopia, Syria, Israel/Palestine, and Myanmar. These failures have disproportionately impacted marginalized communities and facilitated serious human rights abuses.
The Declaration is an effort to advance consistent and rights-respecting principles for companies to respond appropriately to crises and meet their obligations and responsibilities under international human rights law. Here is a summary of our key recommendations – read the Declaration in full here.
- Conduct human rights due diligence (HRDD) to address the lifecycle of crises, situations of conflict, and human vulnerabilities:
- Conduct regular ex ante human rights impact assessments (HRIAs), as outlined in the UN Guiding Principles on Business and Human Rights, and take all necessary steps to address and mitigate any identified adverse human rights impacts.
- Identify and consistently monitor conflict-affected and high-risk areas. Social media companies should develop a crisis matrix to flag areas for heightened due diligence (a schematic sketch of what such a matrix could look like follows this list).
- Build teams with strong local and regional expertise and language skills.
- Conduct human rights and conflict-sensitive risk assessments specifically tailored to the national specificities of potential crises and the context of the affected area.
- Subject their crisis response mechanisms to yearly independent audits that assess their effectiveness, identify gaps in policy response, enforcement, and resources, and ensure the proper implementation of lessons learned and recommendations.
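The crisis matrix mentioned above is a policy tool rather than a product specification, but a minimal sketch of one possible shape for it may make the idea concrete. Everything in the snippet below, including the indicators, scales, and thresholds, is a hypothetical illustration that would need to be defined together with local and regional experts; it is not a methodology from the Declaration.

```python
# Illustrative sketch only: a hypothetical "crisis matrix" entry that scores a
# region against a few example risk indicators to flag it for heightened due
# diligence. Indicators, scales, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class RegionRisk:
    region: str
    armed_conflict: int           # 0-3: severity of active hostilities
    hate_speech_trend: int        # 0-3: growth in flagged incitement or hate speech
    civic_space: int              # 0-3: restrictions on press and civil society
    connectivity_disruption: int  # 0-3: shutdowns, throttling, or blocking

    def score(self) -> int:
        return (self.armed_conflict + self.hate_speech_trend
                + self.civic_space + self.connectivity_disruption)

    def tier(self) -> str:
        s = self.score()
        if s >= 9:
            return "crisis: activate crisis protocol"
        if s >= 5:
            return "high risk: heightened due diligence"
        return "monitor"

# A hypothetical region crossing the heightened-due-diligence threshold.
print(RegionRisk("example-region", 2, 3, 2, 1).tier())
```

In practice, a matrix like this would draw on many more signals and on qualitative input from the trusted partners described in the next set of recommendations.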
- Create channels for meaningful and direct engagement with relevant independent stakeholders, including civil society organizations operating in conflict-affected and high-risk areas. Notably, external experts, relevant stakeholders, and representatives from affected communities should be able to inform platforms’ content moderation and content curation policies, including by:
- Developing strong and continuous cooperation with trusted partners, independent media organizations, individuals, and flaggers, especially where activity on the platform is likely to escalate violence and exacerbate tensions.
- Allocating sufficient financial, linguistic, and human resources to content moderation efforts.
- Establishing early warning and clear escalation systems for emergency situations to help detect imminent harm to individuals’ physical safety (a simple routing sketch follows this list).
- Coordinating global, regional, and local offices and staff efforts to allow timely and coherent decision-making, led by human rights officers well-versed in the dynamic context.
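To illustrate what an early warning and escalation system could mean in practice, here is a minimal, hypothetical routing sketch. The queue names, report fields, and triage logic are assumptions for illustration only; a real system would involve human judgment, regional context, and far richer signals.

```python
# Illustrative sketch only: a hypothetical routing rule for trusted-flagger and
# user reports. Reports indicating imminent physical harm bypass normal queues
# and reach an on-call crisis team directly. Names and fields are invented.
from dataclasses import dataclass

@dataclass
class Report:
    source: str                    # e.g. "trusted_partner" or "user"
    category: str                  # e.g. "incitement", "doxxing", "spam"
    imminent_physical_harm: bool   # set by the reporter or a triage step
    language: str                  # used to match reviewers with language skills

def route(report: Report) -> str:
    if report.imminent_physical_harm:
        return "page_on_call_crisis_team"   # immediate human escalation
    if report.source == "trusted_partner":
        return "priority_review_queue"      # staffed with relevant language expertise
    return "standard_review_queue"

print(route(Report("trusted_partner", "incitement", True, "my")))
```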
- Develop crisis protocols across all levels and likelihoods of risk, designed to prevent and mitigate potential harms:
- Develop and adopt user-centric, conflict-sensitive measures that adequately mitigate all identified and foreseeable risks of future or ongoing crises. These should specifically focus on protecting the human rights of groups and individuals.
- Develop and test crisis protocols before a crisis breaks out, focusing specifically on the risks for groups, individuals, and their rights.
- Display and regularly update information about the crisis situation provided by relevant international bodies.
- Appoint a dedicated crisis management team for each identified conflict-affected and high-risk area, with relevant contextual and language skills.
- Conduct ongoing, rapid, and conflict-sensitive human rights due diligence (HRDD) to identify and mitigate any actual or foreseeable negative impact on human rights.
- Activate meaningful, direct, and concurrent engagement with local and regional civil society organizations and experts when a crisis or armed conflict breaks out, and regularly update stakeholders about the ongoing situation and the company’s consequent measures and actions.
- Take an equitable, fair, and consistent approach to engaging in situations of armed conflict and crises, prioritizing resource allocation based on the salience, scale, and scope of human rights threats and violations, and not on market value or profit share. Companies must equitably invest in and prioritize non-English-speaking countries and areas by hiring staff and content reviewers with the cultural and linguistic knowledge to effectively enforce their policies in all operating markets, and by creating a standardized crisis response protocol to be enforced as required.
- Provide full transparency on content moderation policy design and enforcement, both human and automated:
- Make any content moderation policy carve-outs or extraordinary measures public, clear, specific, predictable, and time-limited, and proactively and publicly announce these in the languages spoken by the affected communities.
- Take a context-dependent approach to geo-moderation of content in conflict-affected and high-risk areas, considering that blocking or withholding content within specific countries or areas may not be the most effective approach to content moderation in times or locations of crisis, and should only be considered as a last resort.
- Disclose government requests made to social media platforms and their responses, including through voluntary reporting channels, so long as doing so does not expose employees to serious risk of personal harm and to the extent allowed by legal frameworks.
- Disclose whether any request issued by public authorities has led to changes in the automated decision-making systems used to moderate or curate content related to the conflict or crisis.
- Evaluate the operations of context-blind, automated decision-making systems to mitigate and address the risks and harms of overenforcement (false positives) or underenforcement (false negatives), such as the risk of arbitrary and discriminatory censorship that disproportionately impacts marginalized or historically oppressed communities (a minimal measurement sketch follows this list).
- When deploying automated content moderation and curation tools for non-English languages, ensure that a human always reviews the outputs.
- Disclose any automated models deployed for blanket content de-amplification, “shadow banning,” or content deranking.
- Provide transparency on the criteria that social media platforms use to define, detect, review, and remove so-called terrorist and violent extremist content (TVEC), including content added to the hash-sharing database supported by the Global Internet Forum to Counter Terrorism (GIFCT).
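The overenforcement and underenforcement risks above can be made measurable. The sketch below shows one minimal, hypothetical way to compare automated removal decisions against a human-reviewed sample, broken down by language; the sample data and field names are invented, and a real evaluation would need statistically meaningful, independently audited samples.

```python
# Illustrative sketch only: measuring over- and under-enforcement of an
# automated moderation system against a human-reviewed sample, per language.
from collections import defaultdict

sample = [
    # (language, automated_decision_was_removal, human_review_says_violating)
    ("en", True, True), ("en", False, False), ("en", True, False),
    ("am", True, False), ("am", True, False), ("am", False, True),
]

stats = defaultdict(lambda: {"fp": 0, "fn": 0, "n": 0})
for lang, removed, violating in sample:
    s = stats[lang]
    s["n"] += 1
    if removed and not violating:
        s["fp"] += 1   # over-enforcement: non-violating content taken down
    if not removed and violating:
        s["fn"] += 1   # under-enforcement: violating content left up

for lang, s in stats.items():
    print(lang, "over-enforcement rate:", s["fp"] / s["n"],
          "under-enforcement rate:", s["fn"] / s["n"])
```

Reporting such rates per language would also surface the disparities between English and non-English enforcement that the recommendations above aim to close.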
- Preserve content removed by the platform for three years and create a secure mechanism granting access to this archived material to international accountability mechanisms other than national law enforcement, including the International Criminal Court (ICC), the International Court of Justice (ICJ), and UN-mandated investigative bodies and commissions:
- To ensure accountability and allow judicial bodies sufficient time to review preserved and archived content, ensure it is located and stored outside of high-risk countries and conflict-affected areas, in accordance with international standards on privacy and data protection (a schematic sketch of such a preservation record follows this list).
- Balance protecting individuals who use social media services from exposure to graphic or violent content with preserving opportunities for eyewitnesses to document human rights violations and atrocities.
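As a rough illustration of the preservation requirement, the sketch below models a single hypothetical preservation record: a three-year retention window, storage outside the affected area, and access limited to international accountability mechanisms rather than national law enforcement. The schema, the list of accredited bodies, and the region label are assumptions, not any platform’s actual design.

```python
# Illustrative sketch only: a hypothetical preservation record for removed
# content. Not a real platform schema; all names are invented.
from dataclasses import dataclass, field
from datetime import date, timedelta

# Assumed list of accredited international accountability mechanisms.
ACCREDITED_BODIES = {"icc", "icj", "un_mandated_investigation"}

@dataclass
class PreservationRecord:
    content_id: str
    removed_on: date
    storage_region: str                  # stored outside the conflict-affected area
    retain_until: date = field(init=False)

    def __post_init__(self):
        # Approximate three-year retention window.
        self.retain_until = self.removed_on + timedelta(days=3 * 365)

    def may_access(self, requester: str) -> bool:
        # National law enforcement is deliberately not on the accredited list.
        return requester in ACCREDITED_BODIES

record = PreservationRecord("c-123", date(2022, 5, 1), "eu-central")
print(record.retain_until, record.may_access("icc"), record.may_access("national_police"))
```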
- Create transparent, clear, rights-respecting, and accessible notice and review mechanisms, and provide access to effective remedy:
- Notify users when a moderation decision is made about their content or speech, informing them of what sparked the decision, which rule was broken, how content moderation guidelines were interpreted, what action will be taken, and how to appeal (a sketch of such a notice follows this list).
- Provide a clear, transparent, predictable, and accessible appeal mechanism for users to request a review of a content moderation decision.
- Notify users when they are subjected to automated moderation processes, and explain how such mechanisms operate and how they can request a human review.
- Provide effective, transparent, easy, and timely access to remediation for users affected by a platform’s policies, products, or practices.
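The elements a notice should contain can be pictured as a simple structured message. The sketch below is a hypothetical example of such a notice; the field names and values are invented for illustration and do not reflect any platform’s real notification format or API.

```python
# Illustrative sketch only: a hypothetical structured notice sent to a user
# whose content was moderated, covering the elements listed above.
import json

notice = {
    "content_id": "post-456",
    "trigger": "trusted_flagger_report",        # what sparked the decision
    "rule_violated": "incitement_to_violence",  # which rule was broken
    "policy_interpretation": "call to attack a named group during active conflict",
    "action_taken": "removal",
    "automated_decision": True,                 # signals the right to request human review
    "appeal": {
        "how": "in-app appeal form",
        "deadline_days": 30,
        "human_review_available": True,
    },
}

print(json.dumps(notice, indent=2))
```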
- Address human rights risks related to each platform’s business model:
- Ensure that surveillance-based advertising, i.e. digital advertising targeted at individual segments, usually through tracking and profiling based on personal data, does not contribute to ongoing or future human rights violations.
- Ensure that monetization programs do not channel income to actors associated with sanctioned entities, or to foreign and local actors systematically producing and/or distributing disinformation content.
- Enable account-level safety features for high-risk users, including but not limited to: enabling locking of private profiles against external actors; enabling end-to-end encryption in chat and messaging functions; enabling disappearing messages; rolling out notifications when messages in encrypted chats are screenshotted; limiting the search function for followers lists; and providing digital safety and security tips in local languages (a sketch of such a default safety configuration follows this list).
- Implement a gradual transition phase before winding down operations and notify users of any change in platform functionalities, based on continuous assessment of the conflict’s intensity and life cycle.
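To make the account-level safety features tangible, the sketch below shows a hypothetical set of safety defaults that could be applied to accounts in a conflict-affected area. The flag names and merge behaviour are invented for illustration; they do not correspond to any platform’s actual settings or APIs.

```python
# Illustrative sketch only: hypothetical default safety settings for accounts
# in a conflict-affected area. Flag names are invented for illustration.
HIGH_RISK_SAFETY_DEFAULTS = {
    "profile_locked_to_non_followers": True,   # lock private profiles against external actors
    "e2e_encryption_enabled": True,            # end-to-end encryption in chats and messaging
    "disappearing_messages_default": "24h",
    "screenshot_notifications": True,          # alert when encrypted chats are screenshotted
    "follower_list_search_limited": True,
    "safety_tips_language": "local",           # digital safety tips in local languages
}

def apply_safety_defaults(account_settings: dict) -> dict:
    """Overlay the high-risk defaults, letting explicit user choices take precedence."""
    merged = dict(HIGH_RISK_SAFETY_DEFAULTS)
    merged.update(account_settings)   # user-set values override the defaults
    return merged

print(apply_safety_defaults({"disappearing_messages_default": "1h"}))
```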
- Continue to conduct HRDD to identify, mitigate, and address negative human rights impacts throughout the lifecycle of conflicts and crises, which often escalate and subside in a cyclical manner. Platforms should also:
- Conduct an audit to review whether their crisis protocols and procedures were adequately followed and implemented, integrating feedback from local stakeholders, including civil society and human rights groups.
- Conduct a public, full, and independent human rights impact assessment, particularly when their content moderation actions and crisis measures have severely impacted the human rights of individuals and communities, exacerbated tensions and conflicts, resulted in or contributed to loss of life and physical harms, or raised a collective grievance among affected individuals and communities.
- Ensure that findings of these audits and assessments result in clearly defined, transparent, measurable, time-bound, and public commitments to policy or product change and adjustments.
- Cooperate with national and international judicial and accountability mechanisms and allow access to preserved and archived evidence by national and internationally mandated investigative bodies, the ICC, or the ICJ.
- Grant API and data set access to vetted civil society organizations, journalists, and academic researchers, so they can find and archive human rights documentation, and audit and assess the effectiveness and impact of platforms’ responses.
- Conduct quarterly briefings with local and global civil society organizations on the implementation of crisis measures, their effectiveness, and their impact (or lack thereof). This should also be an opportunity to discuss lessons learned and to outline recommendations for the future.