How Changes to Section 230 Could Affect AI Innovation
In the last few months, services based on Artificial Intelligence (AI), particularly Large Language Models (LLMs), have spread across the globe, drawing increased attention in the financial markets and among lawmakers. Companies have begun incorporating AI and LLMs into their services. According to NVIDIA, LLMs are algorithms that can learn to “recognize, … predict and generate text and other content based on knowledge gained from massive datasets.” Many of these new LLMs are becoming widely used by the public, including ChatGPT, which allows users to ask almost any question and provides a personalized answer synthesized from information already available on the internet. ChatGPT’s success has fueled a boom in AI technologies and capabilities, with companies asking how they can better incorporate generative AI services into their systems to produce a better experience for users.
Last month, Microsoft announced the integration of OpenAI’s generative technology, ChatGPT, directly into its search engine, Bing, both as a chat function and as a search bar incorporated into the search interface. Although the full program is still in preview (there is even a waitlist to join), the trial features on Bing’s website allow a user to “create a 3-course menu” or plan a “trivia quiz” for them and their friends, and these features will grow as the rollout continues. Although Bing’s AI functions are the most thoroughly integrated, Microsoft is not the only company expanding into the AI space. Recently, Google announced the launch of its AI chatbot, Bard, which utilizes Google’s Language Models for Dialogue Applications (LaMDA), in the U.S. and UK. Meta and even Instacart have also announced AI integrations. These companies are using AI in a myriad of ways to enhance the user experience through interactive features and better customer service, ushering in a new era of consumer-focused capabilities that deliver a more personalized experience than ever before. Nonetheless, these capabilities arrive just as the Supreme Court looks to more clearly define the rights and legal responsibilities of digital platforms online, which, as previewed by The Verge, may present either support or hurdles for the innovative world of technology depending on its ruling.
Section 230 and the Supreme Court
At the end of February, the Supreme Court heard oral argument in Gonzalez v. Google. The focal point of the case was whether recommendations by online platforms are entitled to protection from liability under Section 230 of the Communications Act, which grants websites and users immunity from liability for third-party content. Though a majority of the argument focused on content recommendations by YouTube, which is owned by Respondent Google, Justice Gorsuch directed his questions at the limits of immunity as new technologies emerge. He discussed § 230(f)(4) and how, under this provision, the “picking and choosing” of content to display would be entitled to protection, but asked whether anything that goes further would be covered under Section 230. Both Eric Schnapper, counsel for Petitioners, and Lisa Blatt, counsel for Respondent, agreed that this is the correct understanding of Section 230’s limits. Blatt argued that the test for whether a website’s decision counts as its own content should be whether the website affirmatively “endorsed” the content, while Schnapper said that anything presented to the user that is not the result of a user’s query would be considered new content. As The Verge speculated, and the oral argument confirmed, Gonzalez poses an interesting question as to whether AI chatbots working in tandem with search engines could be protected under Section 230.
The Gonzalez argument immediately sparked discussion as to whether AI chat services like ChatGPT qualify for immunity from civil claims under Section 230. Two days after the argument, Matt Perault, Director of the Center on Technology Policy at UNC Chapel Hill, published an article on Lawfare arguing that “Section 230 won’t protect ChatGPT.” Section 230’s authors, Senator Ron Wyden and former Representative Chris Cox, also weighed in on the debate, stating that Section 230 would not protect ChatGPT because, when it answers users, it creates content rather than hosting it. In contrast, Jess Miers, Legal Advocacy Counsel at the Chamber of Progress, argued that AI services like ChatGPT would likely be protected, citing past precedent that affords Section 230 protections to similar services, such as Google’s autocomplete function for searches. ChatGPT itself, when presented with the question, gave mixed answers on the subject. As this is an emerging legal field, there are arguments on both sides; however, these experts focused on ChatGPT rather than the broader implications of AI-powered search engines, which pose different nuances for each side of the argument.
The Case For—and Against—Section 230 Protection for AI
The question turns on whether AI-generated content is the website’s own content or merely a display of third-party content. First, there is a strong argument that AI search results would be protected by Section 230 because they are composed of third-party material. The content drawn on by Bing’s AI search bar and chat comes from third-party content and search results that would normally be displayed in a recommended order. Further, Bing does not claim this content as its own; instead, it credits the sources used in the answer via footnotes. Additionally, because the AI is responding specifically to a user’s question and pulling together information that was requested, this type of activity could be immune from liability under Schnapper’s interpretation of Section 230. Schnapper argued that only decisions directly sponsored and requested by the user would be protected by Section 230, which, as Jess Miers suggested, could cover Bing’s and Google’s search engines insofar as they merely respond to a user’s request. Alternatively, by giving a single response, even one crediting the original sources, the platform may be seen as presenting the “best choice” rather than a list of answers. The AI’s answer could therefore fit within Blatt’s definition of “endorsement” by the platform, discussed at oral argument, and thus not be granted Section 230 immunity.
Second, the Gonzalez argument raised the question of which generative content would need to be covered by Section 230. Justice Gorsuch discussed at length the idea of content generation in relation to AI, noting that “Artificial Intelligence generates poetry” and “goes beyond picking and choosing.” From Gorsuch’s perspective, AI search engines could be excluded from Section 230 immunity because they do create something new. However, there is an argument that search engine answers are far less generative than other generative AI software, and that AI-powered search engines therefore do not produce truly new content. DALL-E, for instance, creates new, original artwork from a “text description” provided by the user. DALL-E’s creations arguably fit the definition Justice Gorsuch hypothesized at oral argument better than the summaries of third-party content that an AI-enabled search engine provides to users. This demonstrates how Gorsuch’s test, though seemingly straightforward, could produce vague definitions rather than a clearer understanding of Section 230.
Expanding this discussion of liability to other fields of law, specifically copyright law,¹ there may be additional reasons why AI-enabled search engines should qualify for Section 230 immunity. An article from Axios discusses the legal nuances of AI works, especially when the output relies on human interaction. The article notes that the Copyright Office recently determined that AI-created images could not be protected by copyright because the user requesting the material was not its creator. This decision has also generated discussion among copyright scholars about the “requirement of ‘human authorship’,” as seen in the Association of Research Libraries’ interview with Jonathan Band. The same rationale for denying copyright protection to generative AI content could be applied to answers produced by AI-enabled search engines, as no human directly creates the conversational answers shown on the search pages.²
Expanding Innovation for Users and the Economy
Any holding, or dictum, that addresses AI could have lasting impacts on the future of AI and LLMs, and potentially adverse consequences for a leading area of innovative technology. LLMs are quickly becoming a major investment space for technology firms and have been enormously influential in shaping the industry. Forbes reported that, prior to implementing ChatGPT into Bing, Microsoft invested $10 billion into OpenAI. The article also noted that companies “like Alphabet and Amazon have already been investing billions of dollars into the field of AI-related research,” demonstrating the industry’s shift toward AI as the next big technology boom. LLMs and generative AI algorithms have not only changed the investment landscape in the technology marketplace; they have also started to change the way users engage with technology and internet platforms. As more individuals leave behind traditional search engines and digital tools in favor of personalized experiences, LLMs and AI-based services provide exactly the personalization those users seek.
Although AI offers this personalized experience by adapting and responding to the user, concerns regarding independently unlawful content remain. Both The Verge and Axios point out that, given the conversational nature of search engine chatbots, users acting in bad faith could trick a chatbot into producing defamatory speech, learning and then disseminating incorrect answers, or teaching users about illegal subjects. Many may remember that Microsoft had to deactivate its AI Twitter chatbot, Tay, only 24 hours after it launched in 2016. Nonetheless, the use of ChatGPT, as an individual chatbot that learned generally from internet sources, has been met with widespread enthusiasm and acceptance. Companies are moving rapidly to adopt AI functionalities for employees: Microsoft, Salesforce, and even TikTok creators are using ChatGPT to send emails to recruiters or colleagues, and some small businesses are using AI-generated emails to grow by making it easier to talk with clients. This rapid deployment and adoption of ChatGPT demonstrates that AI is on the cutting edge of development and is redefining the way users interact online. However, if the environment and opportunity to expand this technology are diminished by the Supreme Court, as commentators like Perault have contemplated, companies will have less leeway to innovate, and users may miss out on personalized experiences and revolutionary technologies.
The field of AI learning and generative content is still evolving, and we will not have definitive answers before the Supreme Court possibly addresses the issue in its Gonzalez v. Google opinion; this article is merely a stepping stone toward understanding the issues surrounding AI use and innovation as they become more prevalent. Given the large-scale benefits of leading in LLM technology, however, proactively restricting these services before they have moved out of their preview phases may stifle AI innovation. Instead, the developing field should be allowed to innovate until AI-enabled search engines enter the full public sphere, allowing policymakers, scholars, and users to analyze the costs and benefits of the technology with a better understanding of its full capabilities, much as the Internet was allowed to develop under a relatively hands-off regulatory framework in its early years. This is a fascinating field, and one that continues to develop rapidly as new companies pursue AI functionality at a frenzied pace; innovators, legislators, and quite possibly the courts will be busy unpacking AI-enabled chatbots for the foreseeable future.
¹ As discussed in the Axios article, there are several emerging legal questions raised by AI creations within copyright law; however, the nuances surrounding copyright and AI are far greater than what can be captured in this article. For more information about these issues such as whether the use of data to train generative AI systems via scraping the internet without permission could be considered fair use, see Jonathan Band’s interview with ARL or Ryan Merkley’s Lawfare post on AI and Intellectual Property.
² It also makes logical sense to treat AI-powered results under Section 230 the same way AI is treated under copyright law: these fields are heavily interconnected, and creating a standard specific to Section 230, treating AI differently in this one particular context, could result in confusion and an unnecessary legal distinction vis-à-vis copyright law.