Artificial intelligence (AI) has the potential to reshape society in a way not seen since the personal computer became widely available. But, as AI develops, so too do the calls for its regulation. Since its emergence, the technology has grown at a rapid pace and has begun to be applied to many aspects of society. As such, world leaders are calling for the development and adoption of technical standards to keep AI “trustworthy” and “accountable.”
Governments across the globe are taking action. Leaders such as President Biden and UK Prime Minister Sunak are meeting with AI experts, civil society, the private sector, and each other to promote responsible innovation and advancement in artificial intelligence. The European Parliament recently passed a new draft law known as the A.I. Act, which would impose new restrictions on AI via a risk-based approach and would be one of the first major laws to regulate artificial intelligence. DisCo has previously discussed the EU’s A.I. Act, and cautions that
Instead of hitting the brakes on AI innovation, EU policymakers should hit the accelerator on smart AI regulation…Putting a brake on research and innovation in AI will not only deprive society of all its benefits, but also jeopardise the development of many other technologies.
In the past week alone, U.S. Congress members Ted Lieu, Anna Eshoo, and Ken Buck introduced a bill to direct the White House and Congress to create a National Artificial Intelligence Commission, and Senate Majority Leader Chuck Schumer recently elaborated upon his proposed “SAFE Innovation Framework” for AI. Moreover, the Biden-Harris Administration also announced new actions that will “further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.”
CCIA President Matt Schruers noted the positive steps forward here in the U.S., stating
We appreciate members of Congress taking the lead and gathering information before writing regulations that could impact U.S. leadership in artificial intelligence.
However, Schruers also made clear that any eventual AI legislation must be a carefully considered, flexible regulatory framework that encourages innovation and competition while mitigating potential risks.
While governments across the globe are cautiously monitoring AI or already crafting regulations for it, people and businesses across all sectors are making leaps and bounds in AI applications: AI chatbots, integrations in Google Search and Microsoft Bing intended to increase search efficiency, and systems that monitor equipment condition to mitigate wildfire risk. Academia is also putting AI to use, not just in research but also in teaching, where AI is being deployed to help catch cheaters and even aid in teaching classes. These are just a few notable examples; AI has already become an integral part of our lives in fields such as healthcare and security, and it is only being incorporated into more aspects of society.
In the rapidly advancing landscape of artificial intelligence, responsible development and deployment are paramount. However, it is crucial to strike a balance between regulation and flexibility, avoiding overly prescriptive principles that may stifle innovation. Rather than rushing to create new laws regulating AI, policymakers should first evaluate whether existing laws adequately address the concerns AI poses; the effective application of existing law will provide the vast majority of appropriate AI regulation. DisCo has discussed both Section 230 and copyright law (at home and abroad) with regard to artificial intelligence. In circumstances where existing law falls short, or where a high-risk AI application is deployed, the principles of responsible AI should guide the design of thoughtful, adaptable regulation that can be applied in all contexts:
- Design for social benefit
- Design to avoid unfair outcomes
- Analyze and minimize risks as you design
- Consider the risks to third parties from AI systems during design, but also the benefits
- Use up-to-date safety, security, and privacy best practices
- Monitor and govern identified risks in deployed systems
- Provide appropriate disclosures for deployed AI systems
These principles, applied in the context of any given system, provide the flexibility to manage risk in AI applications while preserving the benefits AI can deliver.
The U.S. must not cut off the stream of nutrients artificial intelligence needs to grow. To do so risks AI withering on the vine in the United States, jeopardizing both the development of other technologies that rely on it and the U.S.’s leadership position in innovation. The United States has the opportunity to lead in AI, but if we overregulate or inhibit innovation, we risk falling behind other countries pushing forward in this sector. AI will bring new challenges, but the benefits it will deliver are incredible, rivaling the changes that computers brought to the world. With smart regulation and governance, AI will lead to a more innovative and prosperous United States. Let’s not kill off the plant that can provide us with food, oxygen, energy, and so much more.