EU Lawmakers Need to Unlock AI’s Potential, Here’s How
With ChatGPT and other innovative AI tools making waves, developments in the field of artificial intelligence (AI) continue to attract a lot of attention. Some people are calling it an "intellectual revolution," while others are fearful. Some EU lawmakers are now suddenly considering all kinds of changes to Europe's first-ever AI Act, driven more by today's headlines than by long-term thinking.
The truth is that AI technology is still very much in its infancy. But in the not-too-distant future, chatbots will provide fast and accurate responses to your queries, and AI tools will instantly create original images or videos for your work presentation. With AI, most things in life – whether work-related tasks, chores around the house, or your hobbies – will become considerably easier, more efficient, and safer.
EU lawmakers are currently finalising negotiations on the AI Act. Here are three things they need to get right if they are to address risks while at the same time unlocking the full potential of AI for Europe.
1.) Promote safe use of generative AI
A well-known example of so-called “generative AI” is the ChatGPT chatbot, but the term basically covers any AI system that can be used to create new content – such as text, video, and images – or answer complex questions. These systems are a subcategory of general purpose AI (GPAI) systems, which are essential building blocks able to perform a variety of tasks. Generative AI has the potential to revolutionise the way we use the Internet, impacting how we work, produce, consume, and create online.
The media frenzy surrounding generative AI has also caught the attention of EU lawmakers here in Brussels, and some now want to add rules for generative AI to the Act. However, instead of putting obstacles in the way of innovation, they should encourage the safe and responsible use of these AI systems.
Members of the European Parliament should not give in to the temptation to introduce stringent requirements for generative AI, at least not without first conducting robust regulatory and economic impact assessments. Generative AI was not part of the Commission’s initial proposal, so including it would be a big gamble.
It would also mark a major departure from the AI Act’s original approach, which seeks to address risks only in relation to the intended purpose of the system. While AI systems intended to manage the safety of air traffic, for example, arguably present a high risk, general-purpose AI systems are neutral and versatile by nature. They pose no specific danger, and in many cases none at all.
As for any other technology, the greatest risks lie not in the AI system itself, but in its usage. So, instead of imposing strict requirements on the developers of these systems – who cannot reasonably assess and mitigate all possible risks, as they simply don’t know how their products will be used – lawmakers should put this responsibility on the users of GPAI. After all, they will apply and use those systems according to their own specific needs.
If the EU wants to become an incubator for the next ChatGPT, it must allow innovation to thrive rather than putting a brake on the development and use of generative AI systems.
2.) Avoid (unintentionally) banning legitimate practices
One of the AI Act’s main objectives is to ban certain unacceptable practices that represent a clear threat to the safety, livelihoods, and rights of Europeans. In theory, this sounds like an excellent idea. But to make it work in practice, lawmakers first need to define very clearly which practices are covered, so as to avoid unintentionally banning legitimate and useful commercial practices. For example, recent proposals by MEPs to ban the use of “subliminal techniques” or “scoring systems” risk inadvertently capturing advertising or the use of systems to detect fraud. To prevent this from happening, lawmakers should agree on clear and objective criteria that capture only the most harmful practices.
3.) Apply stringent requirements to AI that is truly high-risk
Systems deemed to present a high risk of harm will be subject to strict rules under the new Act. In order to promote innovation and AI uptake by businesses and citizens, lawmakers should make sure that the most stringent requirements apply to AI systems that are truly high-risk. That is, those systems posing a significant risk of harm to the health, safety, or fundamental rights of people. This will only work if the AI Act is accompanied by a list of specific, clearly-defined use cases that are deemed to be high risk in nature.
AI can considerably improve efficiency and safety at work, for example by allocating tasks on the basis of objective criteria or by monitoring burdensome and dangerous activities to protect employees’ safety and health. Some lawmakers now want to impose stringent requirements on such systems, but they have failed to single out the concrete high-risk applications those requirements should apply to. By contrast, AI used for entertainment purposes, such as voice recognition, poses little danger and does not warrant strict rules at all.
In conclusion, EU lawmakers clearly aspire to be the first in the world to adopt a legislative framework dedicated to AI, but speed is not the only thing that counts. First and foremost, Europe must create a framework that truly encourages innovation and the uptake of AI. As more innovative AI tools are to come, the European Union must make a choice: promote and spearhead the AI revolution, or severely limit the use of AI and risk being left behind.