
Fine-Tuning the AI Act: 3 Final Steps Before Europe Can Lead the Way

On Tuesday 18 July, the Spanish EU Council Presidency – representing the 27 Member States – and Members of the European Parliament (MEPs) will have their first round of comprehensive political negotiations on the Artificial Intelligence (AI) Act, marking the final phase of the legislative process.

The EU co-legislators have the ambition to reach an agreement on the world’s first complete set of AI rules in the coming months. Yet, in order to fully unleash AI’s potential and create a future-proof framework, there are (at least) three things that EU lawmakers must get absolutely right. These rules will define how Europeans can develop and use AI for the next decade or more, so quality should definitely prevail over speed.

1.) Focus on the risks of AI applications

During the legislative process, both the EU Council and the European Parliament departed from the risk-based approach originally proposed by the European Commission by creating specific rules for so-called “general-purpose AI” (GPAI) and “foundation models” at large – even though these basic building blocks and versatile models as such pose little to no risk.

Like any other technology, artificial intelligence can be used in very positive ways, but also in negative ones. The Commission’s initial proposal therefore rightly focuses on high-risk uses of AI systems, not on the specific technologies underpinning those use cases. Regulating a given technology as such hardly makes sense: technologies continue to evolve over time, and we simply don’t know yet what the next leap in AI will bring. Rules that focus on a specific technology therefore will not stand the test of time.

Moreover, like any other tool, it is not the technology itself that poses a risk; what matters is how it is used. Take the example of a brick: it can be used to build a hospital, but it can also be thrown through a window. Everybody agrees there is no reason to regulate or prohibit the brick itself, only its misuse. The same should apply to AI – if common sense prevails, that is.

Unfortunately, MEPs decided to apply very strict requirements to specific types of highly innovative technologies, such as AI that can generate text, images, audio, and video – also known as generative AI. As pointed out in a recent letter by leading European business executives, the rules proposed by Parliament would heavily regulate foundation models “regardless of their use cases, […] and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks.”

This warning should be taken seriously. If EU lawmakers want to preserve Europe’s innovation potential while preventing risks, they need to focus exclusively on the high-risk uses of such systems. In the same vein, last-minute suggestions from the Parliament to add copyright requirements to the AI Act would make the lives of European AI developers more complicated, as I already explained in a dedicated article on the issue.

This is not to say that MEPs did not improve the initial AI Act. On the contrary, the Parliament made some significant improvements to the allocation of responsibilities along the complex AI value chain, which comprises many different actors. It rightly exempts GPAI systems from any additional rules under the AI Act and only obliges providers of GPAI to assist downstream deployers (or users) of their systems. This is the right approach, as deployers – not providers – are best placed to comply with the Act’s obligations.

It can reasonably be argued that the EU Council made a considerable faux pas by subjecting these essential systems – the actual building blocks of the AI ecosystem – to the AI Act’s high-risk requirements. GPAI systems should, like every other AI system, only be subject to stringent obligations if they are used in a high-risk context.

Now, if we take a step back and look at the bigger picture, it is clear that while both the Council and Parliament positions on GPAI and foundation models need to be improved, they are not irreconcilable. The obvious way forward is to take the best of both worlds, by making sure that GPAI requirements only apply to high-risk uses, as suggested by the EU Council, and by streamlining responsibilities along the value chain, as the European Parliament has proposed.

2.) Avoid unnecessary red tape and regulatory overlap

When asked, everybody will agree that unnecessary bureaucracy, red tape, and regulatory overlap should be prevented at all costs. In practice, however, that is easier said than done, especially when we look at the proposals currently on the table. For example, MEPs added a whole new layer of complexity to the AI Act by referring to, or directly incorporating, rules from – hold your breath – the Digital Services Act (DSA), the Digital Markets Act (DMA), the General Data Protection Regulation (GDPR), as well as the new regulation on political advertising and the Platform Work Directive (PWD), both of which are still being debated. And all this without proper justification.

Needless to say, if EU lawmakers want to create a legal framework that is truly workable, they should keep its focus on AI and steer clear of adding cross-references to other rules. In this light, very large online platforms’ recommender systems should not be classified as high-risk, as the DSA already comprehensively regulates them. Neither should gatekeepers be obliged to publicly register all their AI systems simply because of the size of their company. And political advertising services should not be regulated under the AI Act while a dedicated framework is being debated in parallel. The EU Council and the Parliament need to remove these overlapping rules from the Act during the negotiations.

However, both Parliament and Council did make useful improvements to the AI Act’s infamous Article 6, which determines whether an AI system that falls under one of the use cases listed in Annex III should be considered high-risk. In this case, the two approaches adopted by the co-legislators are actually complementary.

While the Council says that AI generating output that is purely accessory (i.e. of no high degree of importance) to the relevant action or decision presenting a potential risk cannot be classified as high-risk, the Parliament’s position states that only AI posing a significant risk of harm to the health, safety, or fundamental rights of people can be classified as such. These useful clarifications should remain part of the final AI Act, provided that the Annex III use cases are clearly defined and sufficiently targeted. This last point is important, because broadly defined risk areas could nullify any meaningful improvements made to Article 6.

That said, Parliament does want to add unnecessary red tape to Article 6 by requiring all developers to notify national authorities whenever they conclude that a new system is not high-risk. National authorities simply don’t have the capacity to process such a massive stream of notifications, and companies already face significant fines under the AI Act if they misclassify their systems.

Nonetheless, measures that help companies assess whether their systems are high-risk are generally welcome. Making this notification procedure purely voluntary, for instance, would be a step in the right direction. The same applies to non-binding guidelines from the Commission that help companies assess their own systems, which would provide additional certainty.

3.) Prevent the unintentional banning of legitimate AI applications 

A likely sticking point in the negotiations will be the list of prohibited AI systems, which the European Parliament wants to expand considerably. The idea behind these bans is, of course, well-intended. As always, however, the devil is in the details: the longer the list and the broader the definitions, the greater the risk of unintended consequences. Many of the prohibitions now being considered would in fact affect legitimate AI applications.

Take, for example, the ban on biometric categorisation systems that the Parliament wants to introduce. While MEPs seem to think this would rule out the discriminatory categorisation of people, the blanket ban actually risks missing its target and being counterproductive. These systems have already proven useful for inferring the age of users in order to protect children online and for fighting the dissemination of child sexual abuse material (CSAM). They can also detect deepfakes, enhance accessibility, and mitigate bias in datasets.

Likewise, the proposed ban on emotion recognition systems would prohibit systems that help protect people’s health and safety. Think, for example, of AI-powered systems that can detect when the driver of a car dozes off and then activate an alarm to wake them, preventing an accident. To avoid unintended consequences of this nature, which risk banning legitimate AI applications, EU lawmakers will need to identify the most harmful AI use cases and, based on those, design a clear and limited list of prohibited AI systems.

If they make sure to include these three key ingredients, the EU co-legislators will have everything they need for the world’s best AI regulation recipe. Let’s hope that Europe leads the way by striking the right balance between trust and innovation in AI.
