
Maximizing the Impact of the UN’s Proposed Global Digital Compact

Image: The Allée des Nations (Avenue of Nations) at the United Nations Palace in Geneva, with the flags of member countries.

The United Nations is in the process of developing what it dubs a “Global Digital Compact” that would “outline shared principles for an open, free and secure digital future for all.” For this initiative to be constructive, it should center on collaboration between the private sector and governing bodies. To that end, the Global Digital Compact should focus on harnessing the benefits of technology through achievable commitments and cooperation, rather than imposing rigid or duplicative regulatory and governance frameworks that may prove unsuccessful and could contribute to fragmenting digital ecosystems.

The Global Digital Compact seeks to enshrine principles and commitments that leverage burgeoning digital technologies and services to further broader goals of the UN such as promoting opportunity and economic equity, achieving the UN’s 17 Sustainable Development Goals (SDGs), bridging the digital divide, and protecting human rights, among others. The initiative, led by the Permanent Missions to the UN of Sweden and Zambia, aims for the Global Digital Compact to be adopted in September at the Summit of the Future.

In late February, the Co-facilitators of the initiative released a document outlining the “possible elements” that could be included in the full draft of the Global Digital Compact later this year (potentially as early as April or May). The outline delineates broad principles for governments to engage in “digital cooperation” as well as a range of commitments to protect and promote human rights organized in the following categories:

  1. “Closing the digital divides and accelerating progress across the SDGs”;
  2. “Fostering an inclusive, open, safe, secure digital space”; 
  3. “Advancing data governance”; and
  4. “Governing emerging technologies, including Artificial Intelligence, for humanity” (which is likely to be influenced by the work of the UN’s High-Level Advisory Body on Artificial Intelligence).

The Global Digital Compact provides an important forum to potentially harmonize government and industry approaches for unlocking economic opportunity and addressing societal needs. However, to be effective, its principles and commitments must apply across a wide range of national governmental and political contexts and levels of economic and infrastructure development. The commitments set by this initiative would not only apply to countries that are home to vibrant technology sectors and large digital services consumer bases; they would also need to be relevant in countries that are still in the process of constructing both digital (broadband) and traditional (power, roads, and railways) infrastructure.

How the Global Digital Compact Can Focus Public-Private Cooperation on Key Issues

The Global Digital Compact can help orchestrate government-led public-private partnerships that further goals relating to expanding access to digital technologies, promoting a free and open internet, enhancing online trust and safety, and protecting human rights online.

For example, the Global Digital Compact includes a commitment to foster an “inclusive, open, safe, secure digital space,” to promote “a universal, free, open, interoperable, safe, reliable and secure [i]nternet,” and to ensure that “human rights are protected online and offline.” This continues the effort to preserve a free and open internet, in line with the resolution passed at the 47th session of the Human Rights Council in 2021 regarding “[t]he promotion, protection and enjoyment of human rights on the [i]nternet.” Ensuring that online services providers can operate free of undue discrimination or unreasonable demands by telecommunications service providers, protecting freedom of expression online, and committing not to impose internet shutdowns are all crucial commitments for UN members to make to benefit society.

The Global Digital Compact also includes a commitment to “[h]arness data to track, target and accelerate progress across” the UN’s 17 Sustainable Development Goals (SDGs), which aim to tackle issues including health, education, inequality, economic growth, environmental protection, and climate change by 2030. This commitment reflects the immense potential of innovative uses of AI to address urgent needs in the global community, potential that should be unleashed, not constrained, through fora such as the Global Digital Compact. As such, the Compact’s commitments could be instrumental in streamlining cooperation between innovators, business leaders, and development organizations to unlock AI’s potential to accelerate the UN’s work to enact its “plan of action for people, planet and prosperity” through the SDGs.

One example of AI’s direct relevance to the UN’s broader efforts is that digital services and AI are fundamental tools for policymakers and governing bodies seeking to leverage public data to monitor progress toward the SDGs. Fully achieving the SDGs by 2030 will require innovative, AI-enabled methods of addressing problem areas and reinforcing approaches that have already proven effective. Further incentivizing the private sector to harness technology toward these goals should be an approach baked into the Global Digital Compact.

The UN’s Advisory Body on AI offers a promising example of convening governments, civil society, and industry to issue effective guidance in the AI realm, grounded in risk-based models and leveraging, to the extent possible, voluntary consensus international standards. Because AI development is fast-moving and much of the relevant expertise currently sits outside of governments, stakeholder initiatives such as the Advisory Body and the Global Digital Compact can surface policies based on best practices, offering technically and commercially feasible solutions to protect personal privacy and set guardrails that protect human rights.

In particular, multi-stakeholder initiatives such as the Global Digital Compact can ensure there is buy-in from the private sector, civil society, and governing bodies if calibrated to be collaborative rather than combative. By doing so, the UN can ensure that the Global Digital Compact’s commitments are both effective and viewed as legitimate in the eyes of the actors central to carrying them out on a national and international scale.

Areas Where the Global Digital Compact Should Tread Lightly 

While the Global Digital Compact can usefully focus on the positive collaboration opportunities available in identified issue areas, the outline document also proposes including commitments in areas that could lead to overlapping jurisdictional issues and create more tension between governing bodies and private sector actors.

First, the Global Digital Compact seeks to adopt a commitment to “[a]dvance digital trust and safety, including specific measures to protect women, children, youth and persons in vulnerable situations against harms.” To the extent that UN Member States commit to digital trust and safety policies, governing bodies should work alongside industry bodies that have conducted studies and developed best practices following extensive work in this field, rather than prescribe specific solutions.

For example, the Digital Trust & Safety Partnership (DTSP) and similar initiatives have worked in tandem with international governing structures and national governments to build on industry observations and craft effective measures to improve online experiences for all of society. DTSP reflects how industry and civil society can collaborate to build methods to promote strong industry standards in a non-disruptive, yet impactful manner. DTSP’s process seeks to align with the product development cycle at the heart of the services impacted, and targets commitments to implement trust and safety at levels ranging from product development to deployment. Partner companies make five overarching commitments and DTSP has articulated 35 concrete best practices that provide examples of how partners address harmful content and conduct. This industry-led body has iteratively developed best practices through robust testing of their effectiveness.

Second, how the Global Digital Compact proceeds with regard to more fluid and nascent issues such as artificial intelligence and what has been dubbed “Digital Public Infrastructure” (or DPI) requires more caution and precision. There is no established definition of DPI, as reflected by the UN Development Programme’s website dedicated to the subject; the concept is evolving, ill-defined, and not relevant to all markets. The G20 has described DPI extremely broadly as “shared digital systems… [that] can be built on open standards and specifications to deliver and provide equitable access to public and or private services at societal scale and are governed by applicable legal frameworks and enabling rules to drive development, inclusion, innovation, trust, and competition and respect human rights and fundamental freedoms.”

However, how DPI is interpreted and applied at the national level could vary wildly depending on the form of government, existing infrastructure, and regulatory frameworks in place. DPI could serve as a bridge between digital services and the delivery of crucial services through payment, identification, and data transfer systems. But if such measures are imposed in a manner that hinders companies’ operations and their ability to innovate, they can undermine the long-term viability of that digital infrastructure by disincentivizing participation or displacing effective market-based solutions. Further, the bulk data collection required to implement the digital identification and seamless electronic payment systems sought through DPI raises potentially significant data privacy concerns, particularly in authoritarian-leaning regimes. If DPI were to become ubiquitous in certain urban areas, governments could build near-constant tracking of citizens’ movements and actions.

Broadly, government intervention and competition in digital infrastructure are only helpful, necessary, and appropriate when introduced to solve a demonstrated market failure. As world-renowned economists Joseph E. Stiglitz, Peter R. Orszag, and Jonathan M. Orszag detailed 25 years ago, “The government should exercise substantial caution in entering markets in which private-sector firms are active,” and it should “generally not enter markets to provide more competition to existing firms,” instead leveraging other regulatory redress mechanisms or incentives such as taxes and subsidies.

Finally, as the UN addresses AI in the context of these commitments, the potential benefits of technologies should be reflected in the Global Digital Compact. The United Nations AI Advisory Body’s Governing AI for Humanity interim report from December 2023 highlights the importance of AI well, noting that “AI has the potential to transform access to knowledge and increase efficiency around the world,” including in assisting individuals’ everyday needs such as education, improving food security by bolstering agriculture, advancing healthcare, catalyzing scientific and disease research, harnessing data to lead efforts in environmental conservation, and supporting public services.

To ensure these potential benefits are unlocked at the necessary global scale, the UN’s governance guidelines for emerging technologies should reflect a commitment to careful study of specific potential harms and to flexible, risk-based regulatory regimes, while discouraging the adoption of unduly rigid and burdensome rules that could undermine countries’ ability to harness AI’s full potential. This is particularly important because a range of competing approaches to AI oversight is currently being pursued globally; absent consensus, conflicting frameworks could result if the Global Digital Compact adopts overly prescriptive commitments.

As the UN moves forward with commitments related to AI governance, the following should be front and center:

  1. Definitions for AI governance should be consistent and aligned, with responsibilities clearly delineated between developers, deployers, and end users. International approaches such as the Hiroshima AI Process should be followed in this regard.
  2. AI guidelines should not duplicate or impede the use of existing laws and regulations, as doing so could unnecessarily hinder innovative approaches to AI use without substantially improving oversight or protection against the targeted potential harm. As the UN AI Advisory Body’s Governing AI for Humanity December 2023 interim report states: “To be effective, the international governance of AI must be guided by principles and implemented through clear functions. These global functions must add value, fill identified gaps, and enable interoperable action at regional, national, industry, and community levels. They must be performed in concert across international institutions, national and regional frameworks as well as the private sector.”
  3. Proposed AI guidelines should include, as an overarching and foundational goal, the pursuit of measures that are reasonable, tailored to each situation, and account for the benefits, risks, and costly burdens involved.
