
Rebecca Tushnet: “Content Moderation and Disclosure: The Cautionary US Experience”

***

The Amsterdam Law & Technology Institute’s team is inviting scholars to publish guest articles in the ALTI Forum. Here is the latest contribution, authored by Rebecca Tushnet (Harvard Law School).

***

In the last year, the US states of Florida and Texas have attempted to require large online platforms to be “viewpoint neutral” in content moderation. The purpose was clearly set forth by Florida governor Ron DeSantis in his signing statement: to combat the “biased silencing” of “our freedom of speech as conservatives . . . by the ‘big tech’ oligarchs in Silicon Valley.”

The laws also purport to require extensive disclosures about content policies, including highly detailed individualized explanations when the platforms remove or otherwise downgrade the visibility of user-supplied content. (Florida initially exempted Disney from its law’s requirements, but changed its law in retaliation for Disney’s mild refusal to endorse Florida’s anti-LGBTQ measures after they were enacted, which says something about the larger purpose of these content moderation measures.) Presently, the enforcement of both laws is mostly enjoined, though this is a precarious situation and the US Supreme Court is plainly going to have the last word.

One likely reason the Supreme Court has already stepped in—enjoining enforcement of the Texas law after the federal court of appeals in Texas allowed it to go into force—is that the court of appeals did so without any explanation. That is, the court of appeals told platforms that they owed users detailed explanations for governance decisions, without itself explaining why that did not violate the free speech rights of the platforms, as publishers, to decide what kinds of content they were willing to host. This is no mere irony. This raw exercise of power to burden perceived political enemies highlights the ways in which some governments are attempting to be the only source of legitimate decisions about the content of public discourse. And it highlights the dangers of giving governments that power.

Large platforms have certainly merited much of the criticism they’ve received. But American constitutionalism has long been wary of letting the government pick winners and losers in speech environments, because of historically well-justified fears that the government will abuse its power to suppress political (and social, cultural, and artistic) speech that conflicts with the government’s preferred policies. There are growing attempts in the US to suppress public discussion of the history of race relations, with “memory laws” dictating that the history of US slavery may not be presented as anything but a triumph over a small aberration. And, as referenced above, some US states are attempting to redefine acknowledging the existence of LGBTQ people as inherently dangerous to children and teenagers, which also requires suppression of speech; now Texas is also threatening to go after drag shows. The Florida and Texas platform regulation laws were explicitly presented by their enactors as punishment of large platforms for not being friendly enough to conservative ideologies, and Texas is likewise pursuing actions against Twitter to support the interests of Elon Musk, a newly-announced Republican—following up on its ongoing harassment of Twitter for allegedly discriminating against conservative users in violation of its own terms of service.

The US is a nation in danger of losing its democratic institutions, where a violent insurrection recently attempted to prevent the peaceful transition of power, and where both federal and state governments have an unfortunate history of deeming their enemies to be violent revolutionaries on thin evidence. In this context, it is not enough to point to the real harms that powerful online platforms can do. It is necessary to look at the ways in which particular speech regulations will be enforced. The strong evidence is that the kinds of regulations enacted and proposed by many legislators will be used to harm the powerless and protect the powerful. Under the Texas and Florida laws, for example, platforms could not remove speech calling for the execution of LGBTQ people unless they also removed speech reaffirming the value of LGBTQ lives (the laws purport to exempt removals of “incitement,” but this concept in US law requires a specific and imminent threat, not a general call for death to enemies). They could not remove Holocaust denial unless they also removed Holocaust memorials. It is a well-known adage that a lie can make it around the world before the truth can finish putting on its shoes—but these laws are designed to ensure that private institutions cannot voluntarily put up barriers to those lies.

Separately, these laws exploit the difficulties caused by scale in order to punish platforms. When there are a billion posts to moderate every day, a system that is 99.9999% accurate at detecting bad content—however defined—will still make 1,000 errors a day. An individualist approach makes each error a potential source of thousands of dollars per day in damages. Given the inevitability of mistakes at scale, users can easily point to inconsistencies to claim that they were discriminated against. As Evelyn Douek has convincingly written, platform regulation that focuses on individual content decisions will never be able to grapple with what platforms actually do—including the harm they can do. Regulation meant to address actual harm would focus on design decisions.
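To make the arithmetic of scale concrete, here is a back-of-the-envelope sketch in Python. All of the numbers are illustrative assumptions: the one billion posts per day and 99.9999% accuracy used above, plus the $100,000-per-claim statutory damages ceiling discussed below in connection with the Florida law. Treating every error as a successful claim is deliberately a worst-case assumption, not a prediction.

```python
# Back-of-the-envelope arithmetic for content moderation at platform scale.
# All figures are illustrative assumptions, not measured data.

posts_per_day = 1_000_000_000   # assumed daily volume of posts to moderate
accuracy = 0.999999             # assumed share of decisions made correctly
error_rate = 1 - accuracy       # i.e. one mistake per million decisions

expected_errors_per_day = posts_per_day * error_rate
print(f"Expected moderation errors per day: {expected_errors_per_day:,.0f}")
# -> about 1,000 errors per day, even at 99.9999% accuracy

# Worst case, purely for illustration: every error becomes a successful claim
# at the Florida statutory maximum discussed later in this piece.
damages_per_claim = 100_000
worst_case_daily_exposure = expected_errors_per_day * damages_per_claim
print(f"Worst-case daily exposure: ${worst_case_daily_exposure:,.0f}")
# -> on the order of $100 million per day under these assumptions
```

Even granting near-perfect accuracy, in other words, an individualized-liability regime converts roughly a thousand inevitable daily mistakes into roughly a thousand potential claims.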

But this leads to a problem that is much harder for the current American free speech framework to address: What should be done about transparency requirements, which can shed light on whether regulatory design intervention is needed in a variety of ways, including protecting consumers and protecting children?

Disclosure requirements often seem like a simple political compromise: Banning some unpleasant activity directly may require a head-on conflict with the people who like that activity, a reckoning with the possibility that there is a human right to engage in it, and difficult definitional questions in writing the ban. Instead, disclosure mandates require private actors to disclose their relevant involvement. The hope is that the free market will then take care of the problem as many consumers avoid the private parties who disclose their unpleasantness and pressure them to do less of it, while some private parties will avoid engaging in the unpleasant activity so they don’t need to disclose. (In the EU, platform regulators seem to have more freedom to delegate the tough definitional work to private platforms than they do in the US, so US law’s demand for definitional precision when the law bans conduct also increases the relative attractiveness of disclosure in the US.) But although mandatory disclosure is politically cheap, it may not be very good at decreasing the amount of unpleasant activity, while it may be very good at imposing other costs, including signaling that disclosers are political outsiders.

As part of the rise of free speech absolutism, US law is increasingly hostile to mandatory disclosures—except perhaps with respect to “commercial speech,” loosely defined as speech that proposes a commercial transaction, which can include general statements about a commercial seller’s products and operations. Platform content moderation standards are plausibly “commercial speech” about their own services, and thus disclosure of those standards might be more acceptable than other forms of regulation. In brief, mandatory disclosures are acceptable for commercial speech when the required information is factual (and “uncontroversial,” though whether that should be a separate part of the test is dubious, especially given that everything is controversial in American politics at this point) and when the disclosures are not unduly burdensome on the commercial speaker.

Both the Florida and Texas platform laws attempt to use onerous disclosure requirements to expose individual platform decisions to greater scrutiny.  Under the Florida law, for example, a covered platform must “publish the standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban.” “Censor” means any action taken to “delete,” “edit,” or “inhibit the publication of” content, and also any effort to “post an addendum to any content or material.” To “deplatform” is to ban or temporarily ban a user for more than 14 days, while to “shadow ban” is to “limit or eliminate the exposure of a user or content or material posted by a user to other users of [a] . . . platform.” In addition, a platform must inform its users “about any changes to” its “rules, terms, and agreements before implementing the changes.” It must also provide a user with view counts for their posts on request. Platforms that “willfully provide[] free advertising for a candidate must inform the candidate of such in-kind contribution.” And, before a social-media platform deplatforms, censors, or shadow-bans any user, it must provide the user with a detailed notice in writing, which must include both a “thorough rationale explaining the reason” for the “censor[ship]” and a “precise and thorough explanation of how the social media platform became aware” of the content that triggered its decision, unless the censored material is obscene.

Many things could be said about these requirements, including the way they would provide a roadmap for abusive posters to work around content rules. In some ways, the most revealing provision is the one regarding “willfully” providing “free advertising” for a political candidate: The platforms, quite reasonably, think they don’t ever do this, so they are unlikely to provide any notices. But the conviction that platforms are helping Republicans’ perceived enemies generated a new notice requirement written into law. When a platform fails to provide any notices, a Florida enforcer can now investigate whether it is complying with the law, look in detail at how candidate posts are disseminated on the platform, and potentially allege that the operation of a particular algorithm amounts to the provision of “free advertising.”

The court of appeals dealing with the Florida law ruled that the plaintiffs had only shown that one of the disclosure mandates, the one requiring a “precise and thorough explanation” for any adverse action against a user, was unconstitutional. In the court’s view, each provision served the state’s legitimate interest in “ensuring that users—consumers who engage in commercial transactions with platforms by providing them with a user and data for advertising in exchange for access to a forum—are fully informed about the terms of that transaction and aren’t misled about platforms’ content-moderation policies.” However, the plaintiffs showed that the mandate to explain adverse actions in such detail was unduly burdensome, given the billions of posts moderated by large platforms. The court pointed both to the significant implementation costs and to the risk of huge liability for mistakes—up to $100,000 in statutory damages per claim, depending on whether a court agreed with the platform that its explanation satisfied the vague terms “thorough” and “precise.” By contrast, the plaintiffs didn’t show that the other disclosure mandates were unduly burdensome, though they might well do so at a later stage of the litigation. (Justice Alito’s dissent from the stay of the Texas platform censorship law likewise singled out the disclosure requirements as more likely to be upheld.)

The same core vagueness objection could easily be applied to requirements that platforms publish their “standards, including detailed definitions.” Does that include publishing specific examples (recreating the roadmap-for-abusers problem noted above)? Is it legitimate for moderators to reason from those examples to other, arguably similar posts, or does that constitute a “change” to the rules requiring users to be informed in advance? Announcing changes before implementing them sounds good, but understates the challenge of billions of users inventing new ways to do bad things. Effective prohibitions must be general enough to be responsive to abusive innovation, but that strips them of detail, and also inevitably creates debatable cases—instances that might or might not violate a policy. Even the seemingly specific “don’t compare members of an identifiable group to animals” creates debatable instances.

Perhaps most saliently, the Florida law allows disgruntled users to go to court arguing that they weren’t given the necessary “detailed definition” because their posts were moderated using standards that don’t precisely specify their exact words or images. It also allows the Florida attorney general to demand extensive evidence from platforms’ internal operations on the theory that platforms are hiding their actual standards. Given these dynamics, Eric Goldman has argued that editorial transparency requirements should generally be held unconstitutional under US law. They are, he argues, plainly designed to deter platforms from making choices about what content to allow, and that chilling effect should condemn them.

Yet the appeal of transparency obligations is substantial. It does seem a matter of public interest to know, for example, whether Facebook’s prohibitions on nudity exempt breastfeeding mothers and breast cancer public service announcements. Transparency interacts with scale: To my knowledge, no one has suggested that the New York Times or other dominant newspapers should disclose detailed standards for when they print editorials or news content—“all the news that’s fit to print” is clearly puffery, and even the wag’s “all the news that fits, we print” is obviously insufficient to explain what ends up in the newspaper. With a newspaper, it would generally be credible to say “after basic fact checking and defamation review, we select from a pool of acceptable news and print what we think should be printed according to our sole discretion.” But at the scale of a platform hosting millions or billions of posts, more detailed rules will inevitably develop in order to guide the decisions of low-level moderators, who themselves will seek out rules to make their work manageable. So there is actually something of interest to disclose with platforms, compared to newspapers operating on a much smaller scale.

The well-justified assumption of these disclosure laws is that public and advertiser pressure will ensure that some content moderation remains. In theory, one could satisfy the law by having no content moderation at all other than that required by law, but a service with no anti-spam content moderation would quickly become overrun, and most services benefit from anti-pornography moderation as well. What Texas and Florida want is for political content moderation, that is to say, attempts to counter lies and disinformation, to disappear, and they are willing to give spammers and pornographers a cause of action in order to achieve that end.

Ultimately, the transparency question is a variant of the general problem: When a nation faces a democratic decline, what kind of state powers accelerate that decline and what might slow it down? The First Amendment does not appear able to answer that question.

Rebecca Tushnet

***

Citation: Rebecca Tushnet, Content Moderation and Disclosure: The Cautionary US Experience, ALTI Forum, June 23, 2022

Invited by Thibault Schrepel
