What Is Section 230’s “Otherwise Objectionable” Provision?
Recent legislation and Congressional hearings have touched on the meaning of “otherwise objectionable” in the provision of federal law, Section 230, that encourages online intermediaries to moderate and remove problematic third-party content. Specifically: if Section 230(c)(2)(A) of the Communications Decency Act incentivizes digital services to “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” — what does “otherwise objectionable” mean?
And more specifically, as Senate Commerce Committee Chairman Wicker inquired at a Congressional hearing yesterday, should that term be limited? A controversial proposal from the Department of Justice last month similarly suggested limiting the term “otherwise objectionable” to illegality and terrorism, and a bill introduced by several members of the U.S. House today proposes just that. If content isn’t illegal or terrorism, per the DOJ’s proposal, it would have to stay up.
Some concerns about “otherwise objectionable” arise from the view that the term is a blank check. But as former Representative Cox, a drafter of Section 230, explained to Chairman Wicker at yesterday’s hearing, “the words ‘otherwise objectionable’ have to be understood with reference to the list of specific things that precedes them. It’s not an open-ended grant of immunity for editing content for any unrelated reason.”
In short, the term “otherwise objectionable” envisions problematic content that may not be illegal but nevertheless violates community standards or norms, sometimes described as “lawful but awful.” Congress’s decision to use the more flexible term here acknowledged that it couldn’t foresee every form of such content.
If the term “otherwise objectionable” were narrowed, digital services would lose Section 230’s protection for removing this content and would effectively be forced to give it a pass. What does that mean, specifically? The implications would be very troubling.
As I pointed out when DOJ’s Section 230 proposal was released, requiring digital services to host all content that is not unlawful or related to terrorism would open the door to anti-American lies by militant extremists, religious and ethnic intolerance, racism and hate speech, public health misinformation, and election-related disinformation by foreign agents. Today, digital services tend to moderate this content quickly, consistent with their terms of service.
There are various forms of content that Congress did not explicitly anticipate in 1996 but that are certainly “otherwise objectionable.” Digital services frequently rely upon their terms of service to remove spam and content that promotes animal cruelty or encourages self-harm, including suicide or eating disorders.
It is unlikely that Congress could have anticipated in 1996 that a future Internet user might encourage dangerous activity like eating laundry detergent pods, or advise that the coronavirus pandemic could be fought by drinking bleach. The term “otherwise objectionable” acknowledges this. It enables services to respond to this kind of problematic — though not necessarily unlawful — content, and prevent it from proliferating on the Internet.
Not all digital services take exactly the same approach to objectionable content, because one size does not fit all. The nature of a website or service is likely to affect how aggressively it moderates content. A site whose content is marketed toward younger audiences, for example, is likely to take a far firmer approach to moderation than a news forum where potentially upsetting or inflammatory content is almost certain to be discussed. And because the market for digital services is competitive, some services have attempted to chart a different course on content moderation.
One would assume that most Americans do not want an Internet where companies cannot respond to foreign intelligence operatives attempting to sow disinformation or anti-American extremists spreading hateful views online. However, if policymakers choose to limit digital services to removing only content that is illegal, that proposition may be tested.