AI Act: Why the EU needs to think BIG if it wants to lead the way on artificial intelligence

After a well-deserved summer break, EU lawmakers are now back at work and aim to reach an agreement on the landmark Artificial Intelligence (AI) Act before the end of this year. The stakes are high, but the EU still has a clear choice: it can either set a strong example for the world on how to best regulate AI, or it risks falling behind by imposing rules so stringent they would stifle innovation. 

At the moment, there is growing consensus that the requirements in the AI Act are too burdensome for companies, dealing a setback to European AI developers right from the start. Yet, some are now suggesting that the solution would be to apply the most stringent requirements only to the largest AI companies, while subjecting all others to a set of much less cumbersome rules. In other words, asymmetrical regulation.

What they propose is a differentiation of the rules for so-called “foundation models”: highly capable models that are trained on large amounts of data and can subsequently be used for a wide range of downstream applications. Some of these foundation models are commonly referred to as “generative AI”, because they can generate content, such as text, video, images, audio, and computer code, from scratch.

However, this kind of thinking raises a fundamental question: does the EU really want to incentivise the development of unsophisticated AI models by being much tougher on the most capable, and arguably the safest, foundation models? If the EU wants a thriving AI ecosystem of its own, policymakers need to define sensible rules that apply to all, based on concrete risks rather than on who develops the AI.

1.) A thriving and highly competitive ecosystem for foundation models

One of the underlying assumptions of those weighing in on this debate is that the EU is lagging behind other regions and that the leading AI companies are already out of reach anyway. This pessimistic mindset is self-defeating. It accounts neither for the thriving AI ecosystem Europe already has, nor for the ambitious plans presented by many home-grown companies in recent months.

What about French company Mistral AI, which wants to “develop the best generative AI models”, or Germany’s Aleph Alpha, which strives to become “the leading European company researching and creating next-generation strong artificial intelligence”? What about Hugging Face, Stability AI, and Synthesia? The list goes on. The EU should think as big as its own companies if it wants to be at the forefront of innovation, and not just in terms of regulation.

Beyond the EU, competition in the generative AI market is fierce, and the rapid succession of new technological developments is constantly changing the rules of the game. It must not be overlooked that, aside from media favourite OpenAI’s GPT-3.5 and GPT-4 models, a myriad of other advanced models are currently available on the market: from Anthropic’s Claude to Google’s PaLM 2, Meta’s Llama 2, Stability AI’s Stable Diffusion, Midjourney’s models, and Cohere’s Command, to name only a few.

Regulation cannot, and should not, be the result of mere assumptions. It has to be based on facts, because otherwise new rules risk missing their target completely. Bruegel’s Christophe Carugati notes in a recent paper that the markets for foundation models are “competitive, with multiple providers and degrees of openness thanks to several closed- and open-source models, open-source and proprietary data, and vigorous competition between firms at the computing resources level, despite high degrees of concentration in some of these markets.” 

Carugati underlines that these “market characteristics ensure that [foundation model] developers face low or surmountable entry barriers”. While there may be reasons to monitor these markets, it is crystal clear that regulatory intervention at such an early stage is neither appropriate nor desirable. 

It must also be stressed that the segment’s market conditions tend to evolve as rapidly as the technology itself. The cost of computing resources, for example, is often cited as an important barrier to entry. However, the latest innovations in graphics processing units (GPUs) are likely to significantly decrease the costs of training and maintaining foundation models in the near future. The same applies to the model size, number of parameters, and amount of data required to reach state-of-the-art capabilities. Recent research shows that smaller models can outperform much larger ones (some 500 times bigger) on a significant number of tasks.

In this respect, many of the criteria or pre-market parameters used to identify and differentiate between models, such as computing resources, adaptability, and capability, are not set in stone and will simply change over time. Other criteria, such as the amount of investment in a model, do not seem relevant either in determining whether or not a model presents systemic risks. Here again, the costs of training and maintaining a model tend to fall over time as technology advances. AI pioneers are also likely to open the door to other, less well-resourced companies – after first investing large sums and sometimes incurring considerable losses.

2.) No room for asymmetrical rules in the AI Act’s scope

The AI Act is, in essence, part of the EU’s product safety framework. As for any other product, it sets out rules applicable to all AI systems, depending on the risk they present. The fact that the cost of a product – in this case a foundation model – exceeds a certain threshold should not change the Act’s regulatory approach. The same applies to the number of downloads or users of a model. In other words, a product is either safe and can be placed on the market, or it is not. It is worth noting that the AI Act’s product safety approach has not been altered by EU lawmakers, and rightly so.

As it stands, there simply is no room for any differentiation between “smaller” and “larger” foundation models in the AI Act. EU policymakers will therefore need to define sensible rules for developers of such systems, based on risk alone. And this makes perfect sense: a small or less capable foundation model can do at least as much harm as a larger one, if not more.

Leading AI companies are by default under constant scrutiny – whether from regulators, legislators, the media, or the public – simply because they are the most successful ones. Their interest lies in developing the most efficient, but also the safest, models and applications. The mere risk of reputational damage in such a competitive environment is enough to make these leading companies invest massively in user safety.

In this context it simply makes no sense to regulate only the most successful and innovative AI companies, while leaving niche or lesser-known models without any form of oversight. The real risks of misuse of AI systems are likely to come precisely from those companies and players that face far less scrutiny, or none at all.

That is why the AI Act’s rules must be proportionate and apply to all AI companies wishing to offer services in the EU, as originally intended.

3.) Asymmetrical regulation would be disastrous

Last but not least, the consequences of asymmetrical regulation would be disastrous for the European Union in many respects. Not only would it encourage the development of less capable models in Europe, it would also prompt European talent and companies to move to greener pastures to develop cutting-edge models there.

If the EU framework encourages companies to invest less money in less capable models, so as to stay below thresholds for downloads and users, such an exodus would be the rational response. Europe has immense talent and ambitious companies that must not be let down. Instead, the EU’s AI Act should be used as an opportunity to create a regulatory environment that truly fosters innovation.

What’s more, given that the EU wants to set an example and hopes to export its model of AI regulation to other countries around the world, it would be unreasonable to discriminate against their businesses without justification. The EU is heavily involved in international fora such as the G7 and the OECD, as well as in the ongoing AI work of several of its allies, including the United States. Maintaining this open attitude and these relationships will partly determine the international success of the AI Act.

In conclusion, EU lawmakers will be busy in the coming months defining the rules that will determine how AI is developed and used in the EU over the next decade, and discussions on asymmetric rules should not be part of that. The focus should remain on how best to strike the right balance between trust and innovation in AI. To achieve this, EU decision-makers will need to show as much vision and ambition as European tech startups and businesses are displaying right now.
