5 Misconceptions We’re Likely to Hear at Tomorrow’s DMCA Hearing
Tomorrow, the House Judiciary’s Subcommittee on Courts, Intellectual Property and the Internet is holding a hearing on the safe harbors of the Digital Millennium Copyright Act, Section 512, as part of a continuing reexamination of U.S. copyright law. We cover this important framework frequently, because it has been instrumental to the growth of the Internet – by many accounts online safe harbors “saved the Web.” Now that the DMCA is over 15 years old, a number of pervasive misconceptions have developed about its safe harbors. Let’s consider some of the top DMCA misconceptions that we’re likely to hear tomorrow.
(1) “No one anticipated there would be so many DMCA takedowns.” (Also, “filter because whack-a-mole.”)
One of the critiques of the DMCA is that because it is used so frequently, it must not be working. It is strange to argue that a system isn’t working when demand for it is going up, but I hear ‘the Internet must be filtered because whack-a-mole’ so often that if I owned the Whac-A-Mole trademark I’d be worried about it going generic.
It is true that DMCA takedowns are increasing. This suggests rights-holders see value in the system, and that third-party takedown vendors are enabling more rights-holders to outsource policing of their content at lower cost. When Congress enacted the DMCA, it specifically provided, in Section 512(m)(1), that online services have no continuing obligation to monitor Internet content, acknowledging that there would be continued costs to enforcing rights through the DMCA. Congress recognized that assigning this responsibility to online service providers ill-equipped to execute it would hamper the growth of online commerce. It thus forged a compromise, ensuring that rights-holders would need to initiate takedowns, but would receive expeditious, extralegal relief in response to a complaint without the time and expense of going to court. Some would prefer to unwind the compromise struck in 1998, however, and shift more of the burden of enforcing copyrights to service providers, perhaps through some form of content filtering.
(2) “Anyone can see there’s a lot of copyrighted content on the Internet.”
It is not possible to ascertain on sight which works are copyrighted and which are not. U.S. copyright law no longer requires ‘marking.’ More importantly, even for works with identifying information, it is impossible to determine which uses are licensed or otherwise authorized by law. One might simply assume that every digital file is copyrighted and be right most of the time. This blog post, every email and selfie, and every cat video receive instant copyright protection. With copyright’s low threshold of creativity, protection attaching instantaneously, and exceptionally long terms, it is hard for something not to be copyrighted. But what should one do with the knowledge that every photo and email gets a century-plus of protection? Obviously, no one wants to censor the famous Oscar “selfie” that went viral just because the copyright is owned by Ellen DeGeneres (or Bradley Cooper, or Samsung, or whomever…). The fact that something is protected does not mean the rights-holder doesn’t want it online.
Often, when people refer to “copyrighted” content in this context, they actually mean “industrially produced creative works.” More specifically, when someone says “there’s a lot of copyrighted content online,” what they really mean to say is “there’s a lot of stuff online that seems so professional that we should assume it was made by an industrial content producer, and that we should also assume they don’t want it online.” Of course, this isn’t a particularly clear line: what metric should a hypothetical army of content reviewers apply in deciding how professional a work should be before it is purged from the Internet?
Even if there were a clear line, rights-holders of industrially produced creative works often approve of their works being used online for promotional purposes. And it isn’t just Oscar selfies. In the Viacom v. YouTube litigation, it came out that Viacom’s own marketing teams had been secretly uploading the company’s works to YouTube, even after the lawsuit began, and that its lawyers sued over works that had been uploaded to the site by its own personnel.
Given billions of indexable pages, the absence of a reliable list (government-maintained or otherwise) of who owns or has licensed what, and the inherently complex contours of copyright’s exclusive rights and exceptions, only rights-holders are positioned to begin the process of enforcing their own rights.
(3) Infringing content is “illegal.”
It is common to refer to infringing works as “illegal,” but this clouds the fact that the Copyright Act regulates actions, not content. That is, a pirated work itself is not what violates the Copyright Act; the law is violated by the act of reproducing a work without authorization and without the cover of an exception. In many cases this distinction doesn’t matter, but that doesn’t make the point mere semantics. Because two different acts reproducing the same protected work may be permitted or unlawful, depending on who did it and why, it is important to separate the infringement from the content. For example, a law professor posting to YouTube the copyright announcement from an NFL game for her students to analyze in the classroom would likely be fair use, whereas an individual doing the same without any educational purpose might be labelled an infringer. There’s nothing illegal about NFL football games (well, usually): it’s the act that matters.
(4) “Services filter for other illegal content; so they can filter for infringement.”
The previous point leads to this logical flaw. The notion is that because some services attempt to filter wholly unlawful content, such as child pornography, they should also be able to filter lawful content whose use may be unlawful. But filtering for any unauthorized copy of Harry Potter, for example, might also filter out Harry Potter reviews, book reports, and cultural studies. Again, lawful content used unlawfully is not the same as unlawful content. One cannot filter when lawfulness is context- and user-dependent, and even if that were possible, services are rarely told when a use has been authorized.
(5) DMCA compliance is mandatory.
It isn’t. Service providers may comply with the DMCA in exchange for the promise of liability limitations, and many services within the US and abroad (over 66,000 at last count) do so. Notwithstanding that DMCA compliance can be a significant expense, particularly for smaller services, it is viewed by many as the cost of market entry: a regulatory obligation undertaken by responsible businesses.
That being said, DMCA compliance is not compulsory: a service can decline to comply with takedown requests, or even ignore them altogether. As a standard business practice, this is a terrible idea, given the potential exposure to copyright’s notoriously large statutory damages. At times, however, online services can and do rightly refuse to comply with abusive requests by bad actors. Attempts to get one’s business competition kicked offline, or to suppress criticism, embarrassing news, or disfavored speech, are increasingly common. In each case that an online service stands up for users, however, it risks extraordinary liability if a court later sides with a complainant whom the service initially concluded was not acting in good faith.
[This post was cross-posted on Techdirt.]