The PACT Act is yet another attempt at reforming platform liability standards in the U.S. In contrast to other proposals on the matter, it is set on the right path with respect to users’ rights. Is that enough to adopt it as it is? What are its key provisions and where does it fall short?
As if the unprecedented pandemic has not produced enough political turmoil, 2020 has also been a tumultuous year for misguided attacks on Section 230 of the Communications Decency Act. From U.S. President Trump’s executive order on the provision to the EARN IT Act to numerous pieces of legislation seeking to undermine the liability shield and First Amendment protections, freedom of expression is undeniably under threat in the United States.
Section 230 says online platforms like Twitter and Facebook, or even small blogs with comment sections, are not legally liable for the content others post on their platforms. Just as importantly, Section 230 allows those same platforms to make content moderation decisions to enforce their community standards and to protect their users from dangerous content.
The PACT Act, introduced by U.S. Senators Schatz and Thune in June 2020, is the most reasonable proposal we’ve seen to date to reform Section 230. Not quite a breath of fresh air, as the bill raises several unresolved questions, but at least it presents an informed approach worth unpacking.
A step in the right direction: a notice-and-takedown mechanism
Of particular interest is the bill’s notice-and-takedown mechanism, which would require online platforms to remove illegal content when they have been provided a court order assessing the content’s illegality. This procedure is a definite step in the right direction, as it establishes a clear mechanism for alerting platforms that illegal content is on their sites. Importantly, it helps ensure victims of illegal content have support, access to justice, and access to appeal mechanisms.
So, how does the PACT Act’s notice-and-takedown approach stack up against Access Now’s views, particularly those expressed in our content governance recommendations? Below, we unpack select provisions of the legislation.
What the bill proposes: Amends Section 230 to require large online platforms to remove court-determined illegal content and activity.
Our assessment: We agree that courts should determine whether certain speech or conduct violates the law. When it is in violation, victims should be able to go to platforms and seek removal.
What the bill proposes: Court-deemed illegal content must be removed within 24 hours of notice.
Our assessment: The 24-hour window is too short, as it does not provide enough time for judicial review and public scrutiny. Excessively short time frames for content removals lead to unjustified restrictions on freedom of speech, because platforms, seeking to evade liability risk, default to swift deletion rather than careful assessment. Indeed, the Constitutional Council of France declared a similar 24-hour time frame unconstitutional because such a measure “undermines freedom of expression and communication in a way that is not necessary, adapted nor proportionate.”
What the bill proposes: After a user submits a complaint alleging a violation of a platform’s terms of service, the moderation decision must be made within 14 days, without notifying the content provider.
Our assessment: Content providers should be able to object to a complaint within the 14-day window, before any action is taken against their content; the bill, however, only allows them to object after the content has been removed.
What the bill proposes: Allows small online platforms more flexibility in responding to user complaints, removing illegal content, and acting on illegal activity, based on their size (to qualify, a platform must have (1) fewer than one million monthly active users or monthly visitors and (2) accrued revenue of less than $25 million).
Our assessment: We agree that the largest platforms — commonly referred to as “gatekeepers of fundamental rights” — should bear a higher degree of responsibility toward their users and the general public.
However, determining who qualifies as a “small online platform” is not an easy task. The gradual scaling of responsibilities based on platforms’ market dominance and other forms of dominance should rest on a carefully studied set of criteria, established by lawmakers, that help assess a platform’s market power and its ability to shape public discourse.
To summarize, while the PACT Act’s notice-and-takedown mechanism lacks critical safeguards and clear procedural provisions, the proposal has the potential to serve as a valuable framework with some restructuring and tweaks.
The PACT Act alone won’t solve broader, systemic issues
One thing to consider is that, while the legislation would undeniably provide relief for victims who are vindicated in court, the onus should not always be on the afflicted to seek justice. Victims often need prompt responses, and should not have to go through the hassle and expense of legal proceedings to get unlawful content removed in every case. One thorny question is how amendments to Section 230 can fit within a set of reforms that craft tailored responses to different types of problematic content. Additionally, companies have a responsibility to help prevent harm stemming from the abuse of their platforms, especially once they have been notified of it.
There are also systemic considerations affecting how illegal or harmful content is shared online that this bill alone won’t solve. Perhaps the biggest underlying problem that needs to be addressed is the set of business model incentives that has developed in the absence of proper regulatory scrutiny and comprehensive data protection legislation in the U.S. These incentives drive platforms to amplify inflammatory and harmful content (harassment, election disinformation, COVID-19 conspiracy theories) through opaque recommendation systems, because doing so increases engagement and ad revenue.
The PACT Act won’t fix these problems. But, unlike most other Section 230 proposals, it has potential to move us in the right direction on liability reform.