
A Closer Look at Trending Technology Narratives Around Children’s Online Safety


From President Biden’s State of the Union to recent Congressional hearings, children’s safety remains an important issue for policymakers and stakeholders alike. Last year, lawmakers introduced several bills, including the “Kids Online Safety Act,” in an attempt to reshape this ecosystem. Unfortunately, despite their good intentions, such bills would have the opposite effect and undermine the safety of children and all other users. To be clear, these shortcomings do not mean that children’s safety is unworthy of legislative attention; rather, any successful approach must account for how existing policies and tools are already being used to protect children. We are hopeful that a balance can be struck that does not chill user-generated content or force the abandonment of end-to-end encryption and other privacy protections at the expense of users and, most importantly, children. Below we identify and refute a few trending narratives around this issue in hopes of propelling the discussion forward.

Narrative 1: Companies have no duty to report CSAM and related materials

Once aware of it, U.S. companies are legally required to report child sexual abuse material (CSAM) to the National Center for Missing & Exploited Children (NCMEC) or face a fine of up to $150,000. Responsible technology companies – such as Tech Coalition members Amazon, Cloudflare, and Google – continue to increase their efforts to detect such material. NCMEC’s 2021 CyberTipline report noted, among other things, a 21% increase in the number of global companies deploying tools to detect CSAM and a 38% increase in the number of reports filed (which “is actually a good thing”).

Narrative 2: Companies’ use of end-to-end encryption (E2E) would undermine efforts to protect children

A recurring claim is that end-to-end encryption represents an additional threat to child safety online because it would prevent even the platforms themselves from having visibility into users potentially exploiting children. It has been argued that when technology companies implement end-to-end encryption with no preventive measures built in to detect known child sexual abuse material, the impact on child safety is devastating. With several of the largest platforms indicating that they would move to end-to-end encryption by default in 2023, some warned that two-thirds of tech companies’ reports to the CyberTipline would disappear – not because the abuse would stop, but simply because those companies would stop looking for the material.

However, any proposal to undermine or weaken end-to-end encryption would do far more harm than good. As the UN Special Rapporteur on Freedom of Expression has described, E2E is the “most basic building block” for digital security on messaging apps. It plays a critical role in protecting the privacy and security of all people using digital communication channels – including children, minority groups, and vulnerable communities. As lawmakers continue to explore options for combating this problem, we caution against rushing alternatives like client-side scanning, which poses a serious risk by providing a “blueprint for mass surveillance, as it may not be possible for the user or civil society to monitor the hash list used by their device” to ensure it is only scanning for CSAM-related images.
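
To make that auditability concern concrete, below is a deliberately simplified sketch of how hash-list matching works. It is illustrative only: real systems such as Microsoft’s PhotoDNA use perceptual hashes that can match resized or re-encoded copies of an image, whereas this sketch uses an exact-match cryptographic hash (SHA-256) to stay self-contained, and the hash list and function names are hypothetical.

```python
# Minimal sketch of hash-list matching as used in client-side scanning
# proposals. Illustrative only: real deployments (e.g., PhotoDNA) use
# perceptual hashing to catch altered copies; SHA-256 exact matching is
# a stand-in to keep this example self-contained and runnable.
import hashlib

# The device ships with a list of opaque digests. Nothing in the list
# itself reveals what content it targets.
BLOCKED_HASHES = {
    # SHA-256 digest of the bytes b"test" (a placeholder entry)
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_digest(image_bytes: bytes) -> str:
    """Compute an opaque digest of the image content."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_blocklist(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears on the hash list."""
    return image_digest(image_bytes) in BLOCKED_HASHES

if __name__ == "__main__":
    print(matches_blocklist(b"test"))   # True: digest is on the list
    print(matches_blocklist(b"other"))  # False: no match
```

The structural point stands regardless of the hashing technique: the list a device consults contains only opaque digests, so neither users nor outside auditors can verify from the device itself what content the scanner is actually looking for.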

Lastly, it must be emphasized that bodies such as the U.K.’s National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) have been working tirelessly on initiatives such as the Safety Tech Challenge Fund, which aims to “drive the development of innovative technologies that help keep children safe in end-to-end encrypted environments, whilst upholding user privacy.” We urge U.S. officials to look to these ongoing efforts to ensure that we can preserve consumers’ privacy while curbing harmful conduct.

Narrative 3: There is no regulatory agency in America with any meaningful power to control this

Another trending claim is that there is no regulatory agency in the United States with any meaningful power to control or “rein in” American technology companies, with some participants in past Congressional hearings suggesting that these platforms should be required to undergo independent audits. There have even been suggestions to create a digital regulatory commission, approached from the consumer protection angle, that would have the power to shut down sites that do not follow best business practices to protect children from sexual exploitation online.

In practice, companies already face substantial reporting requirements from agencies such as the FTC and FCC. The creation of a new agency is not the answer; it is likely to do more harm than good. Specifically, the FTC and state attorneys general already have sufficient powers under COPPA, including their respective authorities to pursue unfair and deceptive practices, when it comes to protecting children’s privacy and personal information. For instance, the video-game company Epic Games, the maker of Fortnite, entered into a settlement of more than half a billion dollars with the FTC over violations of children’s privacy law. Likewise, the social network app Musical.ly agreed to a settlement with the agency on similar charges. State AGs also fulfill an important role, as this settlement with Oath – formerly AOL – shows. As the economy increasingly shifts online, all regulatory agencies must become digital regulatory agencies; creating another bureaucratic structure would merely add compliance steps for companies and could, ultimately, harm the U.S. economy.

Conclusion

Creating a safer and more trustworthy internet for all requires a whole-of-community approach. Responsible organizations continue to invest in new technologies, such as Microsoft’s PhotoDNA and Google’s Hash Matching API, to help law enforcement and other stakeholders combat this growing problem. Despite the challenges, collaborations like the Digital Trust & Safety Partnership and adherence to best practices around content moderation and other important policies – such as strengthening digital literacy – provide a way forward without undermining the security and privacy of users and children.
