AICOA’s Data Security, Privacy, and Content Moderation Issues Call for Risk Assessment
In recent weeks, a growing number of experts have expressed concerns over AICOA, S.2992. The variety of objections to the bill reinforces that it continues to suffer from several critical defects. Chief among those defects are sections that compel regulated companies to provide access and data to competing third parties, as well as the bill’s impact on content moderation. In the wake of these criticisms, Senator Klobuchar has updated AICOA with some changes to the text; however, this third iteration of the bill does not alleviate any of the previously identified concerns. In fact, in some areas the revised bill raises further questions and concerns. As the Wall Street Journal has also noted, the bill would harm consumers and U.S. innovation.
AICOA’s Access Mandate
The ‘access’ mandate in AICOA sec. 3(a)(4) states that regulated companies cannot “materially restrict, impede, or unreasonably delay the capacity of a business user to access or interoperate with the same platform, operating system, or hardware or software features…” Interoperability is a key element of open ecosystems, and the use of APIs (application programming interfaces) to enable different software applications to communicate with each other is common. But developing and making APIs available to others is and should remain a product development choice, and in any event, sec. 3(a)(4) goes far beyond the concept of APIs.
Complying with this requirement to grant access to any platform, OS, hardware, or feature means giving unfettered access to any putative competitor. Leading cloud services would seemingly need to grant access to back-end infrastructure and physical hardware, the same infrastructure and hardware that supports essential sectors like healthcare, energy, and banking and finance, to say nothing of state and federal governments. Similarly, device manufacturers regularly restrict access to APIs that grant full read/write access to the device (necessary to perform backups, but easily abused in ransomware attacks) or that grant access to sensitive information like health data or mobile payments. These private APIs keep sensitive data and permissions secure, and opening them up to all comers represents a major privacy and security risk.
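To make concrete the kind of gatekeeping at issue, consider a toy sketch of a permission check of the sort platforms use to keep sensitive APIs limited to vetted callers. This Python example is purely illustrative: every name in it (the scopes, the vetted-caller list, the functions) is hypothetical and invented here, not drawn from the bill or any real platform API.

```python
# Hypothetical sketch of a platform-side permission gate. All names here
# (SENSITIVE_SCOPES, VETTED_CALLERS, is_vetted, request_api_access) are
# invented for illustration; no real platform API is being described.

SENSITIVE_SCOPES = {"device.full_backup", "health.records", "payments.wallet"}

# Callers the platform has audited and trusts with sensitive scopes.
VETTED_CALLERS = {"first_party_backup_service"}

def is_vetted(caller: str) -> bool:
    """Return True only for callers the platform has vetted."""
    return caller in VETTED_CALLERS

def request_api_access(caller: str, scope: str) -> bool:
    """Grant a sensitive scope only to vetted callers; allow public scopes freely."""
    if scope in SENSITIVE_SCOPES and not is_vetted(caller):
        # This denial is exactly the kind of restriction that a broad reading
        # of sec. 3(a)(4) could treat as unlawfully impeding a business user.
        return False
    return True

# Today a platform can deny an unknown app full read/write device access...
assert request_api_access("unknown_third_party_app", "device.full_backup") is False
# ...while its own audited backup service is permitted.
assert request_api_access("first_party_backup_service", "device.full_backup") is True
```

Under the mandate as written, returning `False` to a putative competitor is the very behavior that could be challenged, regardless of the security rationale behind the check.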
Another significant concern in AICOA is the language around data privacy measures. While the bill purports to deal with privacy concerns, in reality it would hinder covered platforms from protecting their users’ privacy. Sec. 3(a)(8) makes it unlawful to “materially restrict or impede covered platform users from uninstalling software applications that have been preinstalled on the covered platform or changing default settings that direct or steer covered platform users to products or services offered by the covered platform operator.” Companies such as Google have repeatedly warned that AICOA could put their customers’ data at risk by preventing them from “integrating automated security features.”
AICOA’s Data Transfer Mandate
A second problematic mandate in AICOA is the bill’s compulsory data transfer provision. Sec. 3(a)(7) prohibits a regulated platform from taking steps to “materially restrict or impede a business user from accessing data generated on the covered platform… through an interaction of a covered platform user with the products or services of the business user”. As an obvious example of why this is problematic, if a software platform collects user analytics to ensure developers using its environment are not bad actors, those analytics could be demanded by the very adversary the platform seeks to catch. Indeed, this mandate openly invites data aggregators to use apps as a Trojan horse to collect information on a software ecosystem’s users.
The potential consequences of these provisions have raised alarms in the national security community. Recently, former National Security Advisor Robert O’Brien and former Secretary of Homeland Security Jeh Johnson wrote in Newsweek that “national security risks have been acknowledged by the legislation’s sponsors. Nevertheless, they have not been addressed as part of any meaningful review of the legislation for their implications to U.S. national security.”
The same day, former NSA General Counsel Glenn Gerstell called for “penetration testing” to assess vulnerabilities likely to be created, and suggested “asking the national cyber director to coordinate an expedited review” by key departments. These perspectives aligned with the assessment of Rep. Eric Swalwell, who in a May 10 column wrote:
Instead of pushing these bills through Congress, we should take a step back and think through the unintended consequences. Forcing platforms to interoperate with untrustworthy entities or make it practically impossible for them to refuse integration with apps and other plug-ins that trade in hate speech, vaccine misinformation, violence-inciting rhetoric or low-quality products or services that increase security risks is just not the right answer.
Impact on Content Moderation
The bill inhibits content moderation and thus the basis for many Internet services such as social media, blogs, image sharing, forums, and comment sections. Sec. 3(a)(3) makes it unlawful for covered platforms to “discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition”. This regulation would mean that covered platforms are acting unlawfully and discriminating against certain users when they attempt to address hate speech, disinformation, or even just spam, on their services by practicing content moderation. As other experts, including in the Washington Post, have observed, AICOA could lead to more disinformation. At the same time, there have been concerns among senators that the bill could undermine efforts to combat hate speech and disinformation.
In fact, an additional change in the third iteration of the bill makes it clear that it is unlawful to act “in a manner that is inconsistent with the neutral, fair, and non-discriminatory treatment of all business users” in sec. 3(a)(9). In the previous version of the bill, it was an open question whether covered platforms could establish their own standards. Under the most recent revision, the bill makes clear that regulated platforms must be “neutral.” Yet policymakers, Internet users, and advertisers have all made clear that they do not expect Internet platforms to be neutral toward purveyors of hate, intolerance, Russian war propaganda, extremism, and a host of other problematic types of content online. For example, Gab or InfoWars could easily argue that they are similarly situated to other media sources that are allowed on the major app stores operated by covered platforms.
Given the recent litigation positions and public pronouncements from state Attorneys General in Florida and Texas, it is quite easy to imagine how an AG in either of those states might use this provision to force those app stores to distribute this content.
Latest Bill Revisions Exacerbate These Problems
The third iteration of the bill does not meaningfully address any of these concerns. In fact, it adds language to the platform interoperability and access requirements concerning security risks by creating an exemption in Sec. 3(a)(4) that reads: “except where such access would lead to a significant cybersecurity risk”. This vague exemption will not alleviate any of the previous issues but will further add to the lack of clarity in the bill. It will also leave companies open to litigation over every cybersecurity decision if the underlying risks are deemed merely “moderate” or “emerging”, which again raises a host of national security concerns. Indeed, the term “significant cybersecurity risk” has been used in other regulatory and legislative contexts to refer only to the most extreme risks (i.e., a cyberattack on critical infrastructure like the energy grid). When coupled with the exceedingly difficult burden of meeting the affirmative defenses, this language risks implying that more ordinary cyberattacks, such as scams, ransomware, fraud, or other forms of malware, may be excluded from the exception.
Furthermore, the Rules of Construction limit the list of bad actors subject to removal to those that appear on lists “maintained by the Federal Government”, which is highly inadequate, as many bad actors are unknown or simply not present on these lists; the limitation would make it substantially harder to detect and address security risks across products. Requiring covered platforms to wait for threats to be identified and then added to a federal government list gives bad actors further time and opportunity to continue to take advantage of covered platforms’ customers. DisCo has previously highlighted the national security concerns present in this legislation, none of which were addressed in the recently released third iteration of the bill.
…Narrow Defenses and Definitions
Bill proponents acknowledge these risks and point to affirmative defenses and definitions that ostensibly mitigate them. However, the third iteration’s minor changes to the language on affirmative defenses do nothing to alleviate these concerns. The bill’s affirmative defense in sec. 3(b)(1) requires that any design restriction rooted in security concerns be “reasonably tailored” and “reasonably necessary” and that it “could not be achieved through materially less discriminatory means”. The changes, including the addition of undefined terms such as “materially less discriminatory means”, raise a number of additional concerns regarding the clarity and impact of the bill. But proving the non-existence of any less discriminatory approach would be a Sisyphean task, since one can always hypothesize new ways of responding to a security issue, particularly when the bill offers no yardstick for measuring discrimination comparatively. As such, the bill’s language on affirmative defenses creates uncertainty and undermines any effort to protect welfare-enhancing conduct on digital platforms.
The bill’s definitions are similarly unhelpful. Proponents argue that regulated companies need not worry about being compelled to provide access to bad actors or foreign adversaries because the bill excludes from its beneficiaries “clear national security risk[s]” and entities “controlled by the Government of the People’s Republic of China or the government of another foreign adversary.” But as previously covered here, these definitions are quite narrow and overlook the fact that adversaries do not advertise their identity and may come from other hostile powers such as North Korea, Iran, or Russia.
Toward Risk-informed Legislation
A risk-based assessment of AICOA’s impact would point toward remedies for these deficiencies. This is notable given the timing: even as AICOA singles out U.S. firms for special regulation, China has pulled back proposed regulations and eased off regulatory enforcement after the government received a stake in some of its leading tech companies. At the least, legislation should ensure digital services can appropriately weigh privacy and data security against access for third parties, and put users first. More broadly, a risk assessment might lead policymakers to reconsider why the bill is gerrymandered around leading U.S. technology firms while excluding key foreign rivals and, via a new exemption added in the third iteration, telecommunications and payment companies. Without proper consideration of these risks, AICOA will end up harming both competition and consumers by breaking reliable and convenient products and digital services.