OECD Endorses Governing Principles on Artificial Intelligence
Increased adoption of emerging technologies such as artificial intelligence (AI) can contribute positively to global economic activity, innovation, and productivity, and can help address other key global challenges. While AI has many potential benefits, questions have been raised regarding its impact on competition, human rights, and economic inequality. Recognizing these potential impacts, the Organisation for Economic Co-operation and Development (OECD) recently developed international guidance on how to promote the adoption of trustworthy AI.
In May 2018, the OECD’s Committee on Digital Economy Policy (CDEP) agreed to form an expert group on AI. This group was subsequently established, comprising more than 50 experts from different disciplines and sectors, including government, industry, trade unions, and academia.
A year later, in May 2019, the OECD adopted the Recommendation of the Council on Artificial Intelligence, which puts forth five key principles. The principles set out in the Recommendation are the first of their kind and “set [standards] for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field.”
The OECD Principles included in the Recommendation are:
- Inclusive growth, sustainable development and well-being. Stakeholders are encouraged to “proactively engage in stewardship of AI” in order to pursue beneficial humanitarian outcomes.
- Human-centered values and fairness. AI actors (defined in the OECD Principles as “those who play an active role in the AI system lifecycle”) that develop and deploy AI should respect the “rule of law, human rights, and democratic values.” Further, AI actors should put safeguards in place that allow for relevant and necessary levels of human determination within the context of the AI’s application.
- Transparency and explainability. “AI actors should commit to transparency and responsible disclosure regarding AI systems,” including fostering a general understanding of AI systems and enabling those affected by an AI system to understand its outcomes. This requires providing meaningful information that allows stakeholders and those affected to understand AI systems and to challenge their outcomes.
- Robustness, security and safety. AI systems should be “robust, secure and safe”; they should also ensure traceability of data, processes, and decisions. This principle calls for the application of a “systematic risk management approach” to each part of the system’s life cycle.
- Accountability. The principles conclude with an encompassing aim for accountability – that AI actors should be the ones held accountable for the above principles and for the functioning of AI systems.
The second part of the Recommendation focuses on what governments can do to implement national policies on AI.
There are many other efforts to create recommendations on AI; however, when such principles are pursued unilaterally, their impact may remain localized. For example, the EU has developed a framework for implementing AI ethically, and Canada included the Pan-Canadian Artificial Intelligence Strategy in its 2017 federal budget. In contrast, OECD member countries and six non-member countries have signed on to the OECD Recommendation. While the Recommendation reflects the priorities of member countries, the OECD is seeking international support for these guidelines and welcomes additional adherents from any country.
OECD Secretary-General Angel Gurría believes these principles will serve as a “global reference point” for AI governance, and others have stated that the OECD’s principles “provide a clear orientation to what are the fundamental values that need to be respected.”
The Recommendation was well received globally. In a speech at the OECD Forum, Michael Kratsios, Deputy Assistant to the President of the United States for Technology Policy, called the OECD Recommendation “a historic step.”
While OECD Recommendations are not legally binding, OECD outputs are influential and have many times formed the basis of international standards and helped governments design national legislation. For example, the OECD Privacy Guidelines first adopted in 1980 (and most recently updated in 2013) underlie many privacy laws and frameworks in the United States, Europe and Asia and are referenced in trade agreements.
At their June 2019 meeting, G20 trade and digital economy ministers released the “G20 AI Principles,” which draw heavily on the five OECD Principles. The meeting sought to strengthen G20 cooperation on trade and digital economic policy. In a joint statement, the ministers focused on the responsible development and stewardship of trustworthy AI. They also committed to a more human-centered approach to AI in order to build public confidence in the emerging technology.
The G20’s endorsement of the OECD Principles introduces countries across the globe to the fundamental ideas behind them. These principles act as important guideposts as the world embarks on the development and growth of artificial intelligence. Artificial intelligence’s major influence on many industries, and its future potential, make guiding principles all the more necessary to ensure the best use of this emerging technology.