On November 2, an independent commission set up by Reporters Without Borders published a new declaration on issues relevant to human rights in the digital era. The “International Declaration on Information and Democracy: principles for the global information and communication space” addresses difficult and pressing issues such as misinformation, privacy, and the role of tech intermediaries in ensuring freedom of expression. The declaration, endorsed by a number of important figures in journalism and human rights, contains valuable references to freedom of the press and the protection of journalists, and it calls for a better technological ecosystem for information exchange. Today at the Paris Peace Forum, 12 countries launched a political process aimed at providing democratic guarantees for news and information and for freedom of opinion – an initiative based on the declaration. While we share that goal, our analysis offers a word of caution regarding the recommendations on the role of internet information intermediaries. We explain why this part of the declaration may be problematic for freedom of expression online if poorly implemented or interpreted by decision-makers.
A necessary call for better conditions for journalism
The declaration takes stock of the current challenges for the free press, which are shared by traditional and digital journalism. It reinforces the key role that journalists play in democratic societies and calls for increasing their safety. From our point of view, this clearly includes strengthening digital security, a challenge journalists face in light of illegal eavesdropping by both governments and private actors. Journalists need to be able to rely on technology that works for them and protects their sources. That’s why we view the protection of strong encryption as fundamental to the work of journalists, and we commend the declaration’s call for privacy for those participating in the public debate.
Privacy facilitates the exercise of freedom of expression, which comprises the right to impart and receive information. Both technology and the press play an important role in facilitating our access to information in the public interest. The declaration recognizes this and stresses the social function of the press. We add that our ability to access the internet in times of political and social unrest is also essential to fulfilling that role. Therefore, states should abstain from ordering internet shutdowns or blocking applications. Despite growing public awareness of such network interference, this dangerous trend is escalating, as we recently indicated in a joint report to the United Nations Human Rights Council. We also call for increased attention to the wave of repressive legislation that is targeting online expression and putting journalists’ work and lives at risk.
Another laudable inclusion in the declaration is its call for further transparency. This includes transparency as a means of improving the quality of information, but also as a way to better understand how the content curation algorithms of digital platforms work.
Cautions and considerations regarding free expression
The declaration raises concerns about issues including liability for content dissemination, bias in digital platforms, and the proliferation of misinformation on the internet. We acknowledge and share those concerns. However, we worry that some parts of the declaration may be misinterpreted by decision-makers, leading them to adopt solutions that, without further analysis, could harm free expression.
Liability for expression — some important distinctions
The declaration makes note of liability for those participating in the public debate, particularly for content they disseminate or “help to disseminate.” There are critically important distinctions to be made in this area in order to avoid ill-informed implementations of this idea. First, there are technical intermediaries on the internet that help disseminate content but, as a general rule, should not be held liable for third-party expression. That is the case for hosting and domain name providers, for instance, which do not participate in the curation or prioritization of content and merely provide the technical infrastructure that web pages and apps need to function. Imposing legal sanctions on these intermediaries for the content they host would be a disproportionate measure at odds with internationally recognized human rights principles.
When we consider social media platforms, there is no clear solution, and any efforts in this area must be evidence-based. When a platform curates content algorithmically, it is making decisions about the dissemination of information, but those decisions are typically shaped not only by the creators of the algorithm but also by the conduct of users. Further, design choices for curation that reward user engagement may create an incentive for the companies that use these platforms for advertising to track and surveil users, which implicates other rights. The bottom line is that we need more information to understand how content consumption and dissemination really work. Before we engage in any public policy consideration of liability for digital intermediaries on content, which raises clear and significant risks for free expression, we must have clarity on the extent to which different actors in the information ecosystem exert influence over content creation and dissemination.
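To make that point concrete, here is a minimal, purely hypothetical sketch of engagement-weighted curation. None of the names, signals, or weights come from any real platform; the sketch simply shows how a design choice made by the algorithm’s creators (the engagement_weight) combines with user conduct (clicks, shares, dwell time) to produce a ranking decision:

```python
import math

def engagement_score(clicks: int, shares: int, dwell_seconds: float) -> float:
    """User conduct: behavioral signals a platform collects by tracking."""
    raw = clicks + 2.0 * shares + dwell_seconds / 30.0
    return 1.0 - math.exp(-raw / 10.0)  # squash into [0, 1)

def curation_score(relevance: float, clicks: int, shares: int,
                   dwell_seconds: float, engagement_weight: float = 0.7) -> float:
    """A curation decision: blend relevance (chosen by the algorithm's
    creators) with engagement (derived from user behavior). A high
    engagement_weight rewards whatever keeps users reacting, which in
    turn makes behavioral tracking commercially valuable."""
    return ((1.0 - engagement_weight) * relevance
            + engagement_weight * engagement_score(clicks, shares, dwell_seconds))

# Two posts of equal relevance: the one that provokes more reactions wins.
calm = curation_score(relevance=0.8, clicks=3, shares=0, dwell_seconds=20)
viral = curation_score(relevance=0.8, clicks=40, shares=15, dwell_seconds=90)
assert viral > calm
```

The point is not that any real platform works this way; it is that even in a toy model, the designers’ choices and the users’ conduct are entangled, which complicates any clean assignment of responsibility for what gets disseminated.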
Neutrality — what kind?
The declaration also calls for “political, religious, and ideological neutrality.” It states that platforms should be neutral on those issues when “structuring the information space.” While we understand the concerns regarding possible bias in the curation of content, public policy actions based on the call for neutrality in the “structuring” of the information space may leave room for abuse if important questions are not answered first. There is no doubt that arbitrary discrimination is an obstacle to the exercise of free expression. But what could neutrality mean in the digital information context? Would it mean equal treatment for the different kinds of information fed into a curation algorithm? Or would it mean striving for an ideal of balanced output in search results or social media feeds? The definition of neutrality, as we can see, can be tricky. It implies a neutrality of information input, treatment, and output that is hard to achieve across diverse information systems. Take a search engine, for instance, and compare it with a social media service. A search engine indexes a broad range of information not directly influenced by the user, but its processing and presentation of search results is indirectly influenced by user behavior. That’s how search services offer personalized results. Should a search engine’s neutrality efforts focus on non-discriminatory crawling of sources? Or on non-discriminatory processing and presentation of results? How is neutrality in a search engine compatible with user personalization? If this is a matter of degree, how much personalization or neutrality is enough, and who gets to decide?
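These questions are easier to see in a toy model. The sketch below is an assumption-laden illustration, not a description of any real search engine: each document has a single “neutral” base relevance shared by all users, plus a per-user affinity derived from behavioral history. As the hypothetical personalization_weight grows, two users issuing the same query see different orderings, and it becomes unclear which ranking, if any, counts as “neutral”:

```python
def personalized_rank(base_relevance: dict[str, float],
                      user_affinity: dict[str, float],
                      personalization_weight: float) -> list[str]:
    """Blend a shared relevance score with a per-user affinity score."""
    def score(doc: str) -> float:
        return ((1.0 - personalization_weight) * base_relevance[doc]
                + personalization_weight * user_affinity.get(doc, 0.0))
    return sorted(base_relevance, key=score, reverse=True)

docs = {"doc_a": 0.9, "doc_b": 0.7, "doc_c": 0.5}  # identical for everyone
alice = {"doc_c": 0.95}  # Alice's history favors doc_c
bob = {"doc_b": 0.95}    # Bob's history favors doc_b

print(personalized_rank(docs, alice, 0.0))  # ['doc_a', 'doc_b', 'doc_c'] -- same for Bob
print(personalized_rank(docs, alice, 0.6))  # ['doc_c', 'doc_a', 'doc_b']
print(personalized_rank(docs, bob, 0.6))    # ['doc_b', 'doc_a', 'doc_c']
```

Each stage of this pipeline (which documents enter the index, how affinities are computed, how the two scores are weighted) is a separate candidate for a “neutrality” requirement, which is exactly why the term needs a definition before it can guide policy.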
The question of “neutrality” for social media platforms is perhaps even more complicated. Users themselves input content, and users tend to follow the people and pages that they like. The choices they make reflect their own ideas, religious beliefs, and more. Should companies or governments intervene in the choices of users? To what degree? Should some content or user posts be sponsored to promote “neutrality” or diversity of opinion? Who makes that decision?
The information ecosystem today has characteristics that appear to promote polarization and reactivity, which in turn can have a negative effect on democracy. However, confronting this challenge will take much more than asking companies for “neutrality.” It requires addressing business models, information literacy, design for user choice, and social and educational problems. Consider the reports about the use of WhatsApp, a closed communication channel, to spread misinformation in Brazil before the recent elections. This could be considered a “neutral” channel, since there is no algorithmic prioritization of the messages that run through the platform. Yet in the broader context of the information ecosystem in Brazil, including the dominance of this channel because WhatsApp is often “zero-rated” and therefore free to use, its use may also have increased the challenges for information diversity and fact-checking.
We agree with the declaration’s emphasis on the idea that with greater influence comes greater responsibility and a corresponding need for increased transparency. However, given the considerations outlined above, assigning editorial responsibility or possible liability may not be an appropriate answer in all cases. Platforms should instead provide users, by default, with effective tools to exert the maximum amount of control over their information experience. This could include giving users the ability to turn off prioritization in a news feed or adjust it to their own preferences, or to disable tracking and behavioral advertising. This might represent the type of “neutrality” for platforms that would benefit users.
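What might such protective defaults look like in practice? Here is a minimal sketch, again with hypothetical names and no basis in any real platform’s API, in which algorithmic ranking, behavioral tracking, and behavioral ads are all off unless the user turns them on:

```python
from dataclasses import dataclass, field

@dataclass
class FeedSettings:
    """Hypothetical user-facing controls with protective defaults."""
    algorithmic_ranking: bool = False  # off by default: chronological feed
    behavioral_tracking: bool = False  # no tracking unless the user opts in
    behavioral_ads: bool = False       # no behavioral advertising by default
    topic_boosts: dict[str, float] = field(default_factory=dict)  # user-set preferences

def build_feed(posts: list[dict], settings: FeedSettings) -> list[dict]:
    if not settings.algorithmic_ranking:
        # No platform prioritization: newest first.
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    # User opted in: rank by platform score, adjusted by the user's own boosts.
    return sorted(posts,
                  key=lambda p: p["score"] + settings.topic_boosts.get(p["topic"], 0.0),
                  reverse=True)

# A user who does nothing gets the unprioritized feed; opting in is explicit.
feed = build_feed(
    [{"timestamp": 2, "score": 0.1, "topic": "news"},
     {"timestamp": 1, "score": 0.9, "topic": "sports"}],
    FeedSettings(),
)
assert feed[0]["timestamp"] == 2  # chronological, not score-driven
```

The design choice worth noting is that control is expressed through defaults, not through options buried in a settings menu: the protective experience is what users get without doing anything.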
“Reliable” information — a difficult quest in the digital space
Finally, the declaration’s call for platforms to favor reliable information also raises complex issues for free expression. The declaration recommends transparency, editorial independence, verification methods, and journalistic ethics as tools in this endeavor. In addition to the challenges we explore above related to editorial responsibility, there are also challenges when it comes to a platform’s use of verification methods and journalistic ethics. The expression of opinion is protected as a fundamental human right, and opinion pieces are not necessarily “verifiable.” Speculation, theorizing, satire, and opinion present challenges to fact-checking, online or off. It is also vital that neither states nor companies define journalistic ethics. On a number of social media platforms, one’s news feed contains a mix of personal opinion, news items, editorials, and advertising. Although journalistic ethics could play a role in the design of a news feed or help inform the development of a content curation algorithm, independent, human rights-based human intervention is essential to mitigate the spread of misinformation on communication platforms.
Conclusion: in assigning responsibility, take care not to deputize platforms as guardians of truth or neutrality
All the issues we have explored are difficult, and a thorough analysis of all their implications would exceed the bounds of this post. The challenges the declaration seeks to address are only starting to be adequately researched, and there is a need for more information from internet platforms.
However, we can start with one initial recommendation to those seeking to apply the content of the declaration to public policy decisions: avoid deputizing social media companies or any internet intermediary as a guardian of truth or neutrality, as this risks consequences for free expression and other protected human rights. Social media platforms, and the dominant players in particular, must take heed of their responsibility to consider the human rights impacts of their products. If, however, in encouraging them to take more responsibility we also make them the arbiters of truth, we put those same rights at risk. And we transfer even more power from the people to dominant platforms.
Today, people access, create, share, comment on, and react to information in complex ways. Given the challenges this poses for our democracies, we must find solutions that empower us to deal with information in a constructive but also fundamentally free way. This means putting users in control by giving them more options for how they find, consume, and share content free from manipulation. It also means providing more transparency, especially with regard to ads, including political advertising. Finally, it means looking at the bigger picture and developing business models that do not reward poor-quality information that drives “engagement” by playing on basic human instincts of fear, alarm, and discord.