How to Handle Speech Online: Q&A with Prof. Annemarie Bridy
According to Annemarie Bridy, Professor of Law at the University of Idaho and Affiliate Scholar at the Stanford Law School Center for Internet and Society, the Internet is facing some hard questions.
“How do we limit the spread of hate speech, harassment, and disinformation on billion-user platforms without building an automated censorship machine?” Bridy asks. “How do we protect the rights of provocative speakers while also caring about the dignity of listeners and the overall quality of the online information ecosystem?”
Bridy’s latest paper, “Remediating Social Media: Why Layers Still Matter for Internet Policy,” addresses these very questions. DisCo spoke with Bridy about her paper, and we summarize that conversation in this blog post. We recommend reading her full paper for a deep dive into some of the topics we discussed in our Q&A session.
DisCo: Why should Internet regulation be, as you call it, “layer-conscious”?
Bridy: Internet regulation should be layer-conscious because different layers of the network perform different technical functions in the end-to-end architecture on which the Internet is built. Network providers like Verizon and Comcast act as common carriers and are responsible for routing data to and from endpoints specified by the network’s users. They provide the “pipes” through which content flows in the form of data packets. Edge providers create the applications and services that allow amateur and professional users to populate the Internet with content. The application layer of the Internet is the human experiential layer. It makes sense to regulate content at the application layer because that’s the layer at which people create, experience, and share content.
Different regulatory treatment of infrastructure and application providers also makes sense from the perspective of preventing, proving, and redressing speech-related harms. That’s true because the application layer is the layer at which bits, carried across the network, surface as content—as intelligible words and images that cause legally actionable injuries, including defamation, harassment, infliction of emotional distress, invasion of privacy, and intellectual property infringement. Users interact with each other, for good and ill, at the network’s edge.
DisCo: What is the “platform neutrality” approach, why has it gained prominence, and what are its drawbacks?
Bridy: Social media platforms have historically embraced the principle that they shouldn’t discriminate against content on the basis of political viewpoint. A Twitter executive once proudly called Twitter “the free speech wing of the free speech party.” These platforms were founded on the Internet’s promise that everyone should have a voice, and that principle is deeply embedded in their corporate cultures. But requiring platforms to carry all lawful speech—meaning all speech the First Amendment prevents the government from suppressing—would prevent them from taking steps to minimize hate speech, trolling, harassment, doxing, revenge porn, and disinformation. A wide swath of speech that most people would probably agree is toxic falls within the protection of the First Amendment. We need to think very hard about whether it would be socially beneficial to require social media platforms to display all of that speech. I don’t think it would be.
Setting aside the constitutional questions associated with imposing “must carry” obligations on social media platforms, there’s no indication of broad public support for such a requirement. On the contrary, a recent survey by the Pew Internet Project found that eight in ten American adults believe that online platforms should be doing more to police and prevent abuse online. When asked about balancing the ability of individuals to speak freely online and the creation of a welcoming environment for others, over half of respondents prioritized a welcoming environment.
DisCo: What are the primary laws governing these edge service providers, and what was the motivation behind enacting them?
Bridy: The two most important laws governing how edge service providers handle user-generated content are the Communications Decency Act (CDA) of 1996 and the Digital Millennium Copyright Act (DMCA) of 1998. Together, they provide broad legal protection for edge service providers whose business models entail hosting and displaying large amounts of user-generated content. Without such protection, Congress believed, innovative online services could not launch and scale, because their founders and investors could not afford to assume the risk of unlimited liability for their users’ illegal speech.
§ 230 of the CDA shields covered service providers from being treated as speakers, publishers, or distributors of any illegal user-generated content they host, with the exception of content that infringes intellectual property rights. § 512 of the DMCA fills the gap that § 230 left with respect to claims involving intellectual property. Both laws encourage but don’t require providers to monitor their services for illegal and offensive content. The CDA accomplishes this through its “Good Samaritan” blocking provision, which relieves providers from liability for good faith removal of obscene or otherwise offensive speech.
DisCo: How are these laws (the DMCA and the CDA) effective if they immunize providers from liability for illegal content posted on their platforms and do not require providers to proactively monitor their services for illegal or infringing content?
Bridy: Under traditional common law rules, moderating third-party content can create liability for the moderator for any illegal content that gets through the moderator’s screening process. For online services that have millions of users posting content 24/7/365, the risk of legal liability that comes with moderating user content under common law rules is huge. In such a world, a rational, risk-avoiding service provider would rather be safe than sorry and would therefore simply decline to moderate.
Congress realized in the early days of the Internet that service providers operating under common law rules would have a strong disincentive to moderate user content on their services. Congress also believed that moderation to remove illegal and offensive content would actually be a good thing, particularly given children’s easy access to online services. Congress didn’t want to require service providers to police their services and networks, because it knew that such a requirement would entail unsustainable costs, especially for undercapitalized startups. At the same time, however, Congress didn’t want to discourage providers from monitoring their services if they had the resources and the inclination to do so. The solution Congress chose was to limit service providers’ liability for third-party content while encouraging, but not requiring, them to monitor their services for illegal or offensive content. Both the CDA and the DMCA do exactly that.
DisCo: How do you think the passage of the Fight Online Sex Trafficking Act (FOSTA) will impact content moderation?
Bridy: FOSTA’s likely effect will be to make service providers steer clear of any user-generated content that could even remotely be considered a basis for liability. We’ve seen this already with Craigslist, which eliminated its entire Personals section in response to FOSTA. For providers, it’s all about risk management. A provider facing ten years in prison for hosting the wrong kind of speech is not going to take time to think carefully about how to navigate ambiguous or close cases. Better safe than sorry. The incentive to overblock created by FOSTA has real implications for freedom of expression.
DisCo: What does responsible moderation look like, and why should it be implemented?
Bridy: In the article, I suggest three normative starting points for responsible moderation: clarity, consistency, and appealability.
Clarity: As Danielle Citron has pointed out, clarity in definitions for terms like “hate speech” and “terrorist material” is critical to prevent censorship creep—the expansion of speech policies beyond their original goals. Definitional clarity has other benefits, too. These include notice to users about the kind of speech culture a platform is trying to foster and facilitation of consistent enforcement by the platform. Citron suggests definitions of hate speech drawn from domestic tort law, domestic civil rights law, or international human rights law. Such definitions have the benefit of existing consensus and are supported by bodies of decisional law that clarify their boundaries in specific cases. Vague policies concerning the prevention of “abuse” and the removal of “abusive” or “inappropriate” content create uncertainty for users and provide a poor basis for platform moderators to make principled decisions about removals.
Consistency: It’s one thing for social media platforms to adopt well-defined content removal policies; it’s another for them to enforce those policies consistently given the almost inconceivable scale at which they now operate. Twitter, for example, processes 500 million tweets per day. YouTube ingests over 300 hours of user-uploaded video per minute. As public scrutiny of platforms’ moderation practices has increased, so too have concerns about their fairness and consistency. In the current era of political polarization, platforms need to be able to show that content removals and account suspensions are justified with reference to specific policies. The production of a clear justification for every removal and suspension can counter claims of arbitrariness and political bias.
Appealability: As platforms turbocharge their moderation operations in response to political pressure and shifting public norms, instances of mistaken content blocks and removals will inevitably increase. In addition to regulating categories of speech within the contemplation of § 230’s drafters, Facebook is now targeting fake identities, fake audiences, false facts, and false narratives in the interest of protecting election security and integrity. With this growth in the range of content subject to blocking and removal comes an increased risk of widespread private censorship. The movement in the EU toward tightened timelines—like the 24-hour turnaround in a recently adopted code of conduct for hate speech—will only exacerbate the over-removal problem. Ensuring that social media users have a way to dispute content removals that they believe are unjustified must therefore be a core component of reinvigorated Good Samaritanism, which is what I advocate in the article.
Note: The author has not received funding from CCIA or any of its Members for the purpose of this blog post.