As part of the Digital Single Market (DSM) communication, the European Commission has discussed the possibility of imposing a “duty of care” on Internet intermediaries, which would require Internet platforms to take a more active role in policing user content. Forcing Internet intermediaries to monitor and remove their users’ communications is ill-advised from both an economic and human rights standpoint.
The rapid growth of the Internet was not merely a function of technological innovation. This fundamental restructuring of commerce and communications would not have been possible but for substantive legal reforms that adapted legacy legal concepts to the realities of a hyper-connected Internet age. Arguably the most important legal and legislative development of the Internet era was the concept of intermediary liability limitations for Internet service providers — or, stated less legalistically, the policy choice that Internet services should not bear blame for bad people saying or doing bad things on the Internet. Given the size and scope of the Internet and the volume of online communications, it is safe to say that Facebook, Twitter, Google, Yelp, YouTube, Allegro, and Dailymotion would not exist today had the law evolved to hold websites and Internet services liable for the actions of their users. Further, imagine operating a telecommunications network with the sum of all this information passing through it without being shielded from responsibility for the actions of all of your users. What venture capitalist in her right mind would invest in a platform exposed to liability for billions of websites beyond its control, or trillions of posts composed by third parties? What would Internet business models look like if companies had to pre-screen all user communications before they went live?
Recent developments in Europe, including the Delfi ruling and the DSM “duty of care” proposal, suggest that Internet services may soon be asked to take a more active role in filtering user content. Yet even with advanced filtering tools, unlawful speech is almost always context dependent: libel and defamation would not be obvious to a filter. Even more complex are cases in which otherwise lawful speech is used unlawfully, as with copyright and trademark infringement. Because the rules governing these various types of speech are often the product of complex legal cases, even human review of every online communication would not completely shield an Internet company from liability, since different people can come to different conclusions about whether speech is “harmful.” Standards for what constitutes permissible speech also vary widely from country to country.
Beyond the commercial impact, the implications for free speech would be disastrous. Protections from intermediary liability enable platforms to give people around the world a simple way to express themselves, share what they love, and challenge the restrictions of oppressive governments. One study found that when online platforms are regulated on the basis of content submitted by their users, they remove large amounts of controversial but legal content for fear of facing penalties. The UN’s Joint Declaration on Freedom of Expression on the Internet recognizes the success of laws such as the CDA, DMCA, and the E-Commerce Directive, stating that “intermediaries should not be required to monitor user-generated content and should not be subject to extrajudicial content takedown rules which fail to provide sufficient protection for freedom of expression.”
Even if pre-screening and filtering at scale were feasible, the value of each individual communication — whether it takes the form of a website, a tweet, a Facebook post or a YouTube video — is negligible, while the potential legal exposure is huge; statutory damages under copyright law can reach $150,000 per work infringed. So, in a world where Internet companies were liable for the communications of their users, a rational company would be incentivized to aggressively censor content, leading to significant blocking of ostensibly legal speech, because the costs of under-blocking would far exceed the costs of over-blocking.