In many countries across Africa, consensual same-sex acts have been criminalized by colonial-era laws. In addition, the recent rise of a global anti-rights movement has sparked an uptick in LGBTQ+ repression. As a result of crackdowns and the enforcement of anti-LGBTQ+ legislation that undermines human rights, LGBTQ+ people now face significantly increased risks, online and off.
The online space has often been a safer alternative to meeting in person for LGBTQ+ people who want to build community, find avenues of expression, advocate for their rights, and access life-saving information. This increasingly threatening environment, however, is further shrinking those digital spaces.
To investigate these threats, Access Now conducted research between January 2023 and September 2024, documenting a total of 214 threatening posts on the social media platforms TikTok, X (formerly known as Twitter), Facebook, and YouTube, as well as via the private messaging service WhatsApp. (We included WhatsApp in the scope of our investigation due to its popularity and widespread use in the region, but it has distinct features; see more on this distinction in the cases and recommendations sections below.)
The content in these posts contained a range of threats, such as non-consensual distribution of intimate imagery, incitement to violence, disinformation, dead-naming, entrapment, and doxxing against LGBTQ+ people in Kenya, Uganda, Ghana, Nigeria, and Ethiopia. We and our partners reported these posts to the relevant companies.
Our main finding: out of 214 posts reported via in-app mechanisms and directly to the social media platforms and WhatsApp, only 51 of these posts, all of which violated the companies’ terms of service and human rights standards, were taken down. That’s unacceptable. Both governments and companies are failing to uphold human rights for all, and digital platforms are becoming tools for “rainbow burning,” the practice of stoking hatred and violence against LGBTQ+ people for political gain.
Below, we share a high-level overview of our data; analysis of the political context driving anti-LGBTQ+ attacks; an overview of tech company policies that facilitate these attacks; a summary of the applicable human rights standards; and our policy recommendations for companies and government authorities to meet their human rights obligations and keep LGBTQ+ people safe from harm.
Cases of anti-LGBTQ+ online hate and violence
Following is a summary of documented instances of anti-LGBTQ+ online hate and violence on social media platforms TikTok, X, Facebook, and YouTube, as well as via the messaging service WhatsApp.
Please note: We included WhatsApp in this study due to its popularity in the region, and because WhatsApp channels can be configured so that content reaches a potentially unlimited number of people. However, WhatsApp neither hosts nor disseminates user-generated content to the public, so it is not a social media platform. In addition, the features of WhatsApp messenger and WhatsApp channels are distinct: WhatsApp messenger is end-to-end encrypted and is an important tool for online privacy and security, while WhatsApp channels are not encrypted. Our recommendations below for mitigating harm are therefore tailored accordingly; lawmakers and companies must take a different approach to messaging services with features like WhatsApp’s than they do to social media platforms.
Why is harmful content flourishing?
1. Political scapegoating
While anti-LGBTQ+ hatred and violence may thrive online, what is happening online is not the cause of systemic and intersectional discrimination. The threats we are seeing online are a reflection of the discrimination LGBTQ+ people already experience offline, and political homophobia contributes greatly to this toxicity. In countries like Kenya, Ghana, and Uganda, governments are facing public dissent as a result of ever-increasing cost-of-living crises and economic challenges. This has triggered political turmoil and trust deficits between governments and citizens. Rather than address issues such as corruption, neoliberal economic models, and human rights abuses that impact living conditions and undermine public trust, states have instead opted to deploy political homophobia to scapegoat LGBTQ+ people.
As we note above, this is a tactic known as rainbow burning, which is used by conservative politicians and extremist religious leaders to manufacture moral panics that deflect attention from government failure by targeting the LGBTQ+ community. In some cases, it is also deployed during elections to ostracize LGBTQ+ people and smear political opponents. This tactic does not exist in a vacuum. In Africa, political leaders have often positioned the rejection of LGBTQ+ rights as a rejection of Western cultural hegemony. These nationalist discourses are weaponized by religious extremists, who position LGBTQ+ people as an inherent threat to national culture, morality, and heritage. The global anti-rights movement has played a major role in providing these extremists with the resources to exploit existing colonial-era laws and deepen anti-LGBTQ+ discrimination. In addition to funding, anti-rights groups are providing partners in the region with access to networks such as anti-rights summits, WhatsApp groups, and email lists to strategize on anti-LGBTQ+ bills, and coordinate campaigns that spread harmful stigmatizing disinformation about LGBTQ+ people.
2. Social media companies’ business models
Social media platforms’ business models rely heavily on surveillance-based approaches to capture users’ attention and engagement in order to make money. This is commonly referred to as the attention economy. When platforms personalize information through so-called content recommendation systems, they create filter bubbles, where potentially harmful content is often amplified. This is because inflammatory content generates high volumes of engagement, which in turn generates more advertising revenue for the platforms; algorithms therefore often prioritize this content to drive even more engagement. For example, in 2021, a Media Matters report revealed that TikTok’s algorithm promoted anti-LGBTQ+ content on users’ “For You” pages, without them searching for the content. One featured video, showing Russian police arresting LGBTQ+ people, garnered 9.4 million views.
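To make this dynamic concrete, below is a minimal, purely illustrative sketch of engagement-based ranking. It is not any platform’s actual algorithm; the post attributes and weights are assumptions chosen only to show how an objective that optimizes for engagement alone surfaces inflammatory content just as readily as anything else.

```python
from dataclasses import dataclass

# Purely illustrative: not any platform's real ranking system.
@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    watch_time_sec: float

def engagement_score(post: Post) -> float:
    # Assumed weights: strong reaction signals (comments, shares) count more
    # than passive likes. Nothing here distinguishes outrage-driven
    # engagement from genuine interest.
    return (post.likes * 1.0
            + post.comments * 3.0
            + post.shares * 5.0
            + post.watch_time_sec * 0.1)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ranking purely by engagement means content that reliably provokes
    # reactions, including hateful content, floats to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because inflammatory posts tend to score highly on exactly these signals, a system built around this kind of objective amplifies them by default unless separate safety checks intervene.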
3. Social media companies’ under-resourcing of content moderation in Africa
Social media companies like TikTok, Meta, X, and YouTube overwhelmingly rely on automated content moderation to enforce compliance with their terms of service. Social media companies often claim that natural language processing (NLP) is best suited to detect hate speech and extremist content at scale. However, in the field of natural language processing, African languages such as Amharic, Tigrinya, and Kiswahili are considered “low resource” languages due to the low availability of data in these languages on the internet. Low and extremely low resource languages often do not have enough data to train a large language model, unlike English, which is considered an extremely high resource language because of the wealth of data available. This means that it is difficult to flag illegal content in local African languages at scale, which puts LGBTQ+ people at risk. This is what happened recently with content in Igbo, when Meta’s automated content moderation systems failed to detect violent anti-LGBTQ+ content.
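The practical consequence, sketched below in purely illustrative Python, is that posts in low-resource languages often cannot be routed to a well-trained, language-specific classifier at all. The example counts and threshold are invented for illustration and do not represent any company’s real data or pipeline.

```python
# Illustrative sketch only: figures and threshold are invented, and no
# platform's actual moderation pipeline is represented here.
LABELLED_HATE_SPEECH_EXAMPLES = {
    "english": 1_000_000,   # treated as "extremely high resource"
    "kiswahili": 20_000,    # "low resource" (figure is illustrative)
    "amharic": 5_000,
    "tigrinya": 1_000,
}

# Assumed minimum amount of labelled data needed before a dedicated,
# reliable classifier can be trained for a language.
MIN_EXAMPLES_FOR_DEDICATED_MODEL = 100_000

def moderation_route(language: str) -> str:
    """Illustrate how data scarcity changes how a post gets reviewed."""
    n = LABELLED_HATE_SPEECH_EXAMPLES.get(language, 0)
    if n >= MIN_EXAMPLES_FOR_DEDICATED_MODEL:
        return "dedicated classifier: violating content is more likely caught"
    return "generic fallback: violating content is more likely missed"

for lang in LABELLED_HATE_SPEECH_EXAMPLES:
    print(f"{lang}: {moderation_route(lang)}")
```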
There are also problems with the human side of content moderation. Transparency reporting by social media companies has revealed huge discrepancies across regions, and an opaque approach to resource distribution for languages in Sub-Saharan Africa. For example:
| Company | What its transparency reporting reveals |
| --- | --- |
| Meta | Meta’s April 2024 Regulation (EU) 2022/2065 Digital Services Act Transparency Report for Facebook (DSA transparency report) indicates that the company has a total of 15,000 content reviewers who review content in 80 languages globally. In the EU, English is assigned 98 content moderators. Meta states that there are additional English-speaking moderators who review content in non-EU countries, but it does not share the exact number. |
| TikTok | TikTok’s September 2024 Community Guidelines Enforcement Report indicates that in its global distribution of human moderators, English is assigned 24.5% of moderators, while African languages such as Amharic, Swahili, and Somali share a part of the 16.9% of moderators assigned to content in the 51 languages aggregated as “other.” |
| X | X’s October 2024 DSA Transparency Report provides information on only the most common languages spoken in the EU, and indicates that English is assigned 1,117 of its total 1,275 human content moderators. The company’s Global Transparency Report has no data at all on language distribution. |
| YouTube | YouTube’s DSA Transparency Report for August 2024 indicates that in the EU, English is assigned 3,216 human content moderators, including engineers and Trust & Safety product managers. However, the latest Community Guidelines Enforcement transparency reporting has no data on language distribution in Sub-Saharan Africa. |
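To put TikTok’s reported figures in perspective, here is a rough back-of-the-envelope calculation. It assumes, purely for illustration, that the 16.9% “other” share is spread evenly across the 51 aggregated languages, which the report does not state.

```python
# Rough reading of TikTok's September 2024 figures; the even split across
# the 51 "other" languages is an assumption for illustration only.
english_share = 24.5       # % of human moderators assigned to English
other_share = 16.9         # % shared by the 51 languages aggregated as "other"
other_languages = 51

avg_other = other_share / other_languages
print(f"Average share per 'other' language: {avg_other:.2f}%")                        # ~0.33%
print(f"English vs. an average 'other' language: {english_share / avg_other:.0f}x")   # ~74x
```

On those assumptions, English receives roughly 74 times the moderation attention of an average language in the “other” bucket, a bucket that includes Amharic, Swahili, and Somali.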
Under-resourcing and reportedly toxic working conditions make it difficult for moderators to adhere to human rights standards, and this affects how people are treated on, and interact with, the platforms. Recent lawsuits filed against Meta have asserted that African content moderators are forced to work in environments that heighten psychological stress, through unlimited exposure to traumatic content, inadequate access to psychosocial support, and intense surveillance and pressure to review high volumes of content with a high level of accuracy within a limited time. The Mozilla Foundation has reported that TikTok content moderators in Sub-Saharan Africa, contracted by Majorel, review content in languages and contexts they are not familiar with, and so potentially violating content is not flagged. These moderators’ action or inaction can have far-reaching impacts on people’s lives.
What is the human rights standard?
The content that we and our partners flagged and reported to the social media companies meets the threshold for incitement to discrimination, hostility, and violence, which is prohibited by Article 20(2) of the International Covenant on Civil and Political Rights, an agreement these companies have committed to upholding. They have also committed to the UN Guiding Principles on Business and Human Rights (UNGPs), which require business enterprises to respect human rights in their operations and to have in place policies and processes to identify and mitigate any potential adverse human rights impacts resulting from those operations. These companies must take these cases seriously, particularly in the context of political homophobia, where inflammatory statements carry a real risk of inciting physical violence.
Recommendations
Following are our recommendations for the social media platforms named in this investigation, as well as the messaging service WhatsApp, to meet their human rights obligations.
As we note above in the cases section, WhatsApp messenger is an encrypted messaging service, not a social media platform, and our recommendations below are tailored accordingly. To protect people’s rights, lawmakers and companies must take an approach to mitigating harm that is suitable for the platform or service in question.
In addition to recommendations for the relevant companies, we offer high-level guidance for governments and international organizations seeking to enact rights-respecting platform regulations.
➡ Social media platforms should
- Invest meaningfully, ethically, and transparently in human content moderation in Africa. Automated content-moderation systems alone cannot adequately assess the language and context of content. Not only should the social media platforms provide more resources for human content moderators in Africa, they should also ensure that the moderators tasked with reviewing content and language have sufficient knowledge of local contexts.
- Report on their content-moderation efforts in Africa the same way that the social media platforms already do for EU member states in EU languages, as prescribed by the Digital Services Act’s transparency reporting obligations, particularly the provisions in Article 42. While transparency reporting obligations should be tailored to the context and realities of specific nations, DSA transparency reports show that the social media platforms already have the mechanisms and processes in place to collect and report on content moderation efforts in detail. They should extend this reporting to countries in Africa and other non-EU regions.
- Refrain from making closed-door commitments with government authorities in the region that could enable government overreach, such as TikTok’s reported agreement with the President of Kenya to comply with arbitrary moderation requirements. There must be more transparency on informal cooperation between governments and companies.
- Commit to carrying out and publishing regular, periodic Human Rights Impact Assessments (HRIAs) tailored to the specific markets they operate in, in alignment with the UN Guiding Principles (UNGPs).
- Establish meaningful stakeholder engagement mechanisms, and regularly consult and brief trusted partners on their methodologies and risk assessment processes. Allow trusted partners to set benchmarks for meaningful HRIAs, and commit to complying with those benchmarks. Once published, HRIAs should be subject to independent review by external stakeholders with the required expertise.
- Improve their data privacy standards in line with industry and human rights best practices to maintain the integrity of the user data entrusted to them, including by making their policies public, readily available, and easy to understand, and by including third-party data-sharing requests in their human rights reporting.
- Make their terms of use, policies, and community guidelines simpler and more accessible by localizing them, using plain language, and condensing the text.
➡ WhatsApp should
- Ensure that privacy protections such as end-to-end encryption are strictly safeguarded, as they are critical for the protection of human rights, particularly for LGBTQ+ people in countries where homosexuality is criminalized.
- Ensure that people who use the service are made aware of advanced privacy measures and blocking and reporting tools, and that these tools are backed up by robust trust and safety teams that enable meaningful remedies and harm reduction for victims of tech-facilitated abuse.
➡ Governments and international organizations should
- Adopt a human rights-based approach to regulating platform accountability and content governance. Restrictions to the right of freedom of expression must be lawful, necessary, and proportionate.
- Engage transparently and meaningfully with stakeholders – including civil society, companies, and the public – on issues of content moderation and freedom of expression to facilitate informed framework development, knowledge sharing, and capacity building.
- Develop formalized and transparent mechanisms of consultation with relevant LGBTQ+ and other community groups, especially during the drafting of relevant policies and laws directly affecting these communities.
- Direct social media companies to comply with international human rights law, or where relevant, constitutional and legal frameworks protecting the rights of LGBTQ+ communities.