AI Fortress Europe?
The European Commission is considering mandating a bureaucratic wall for the development and use of Artificial Intelligence in Europe. This would reinforce Europe’s laggard position in the global AI race.
The European Commission is expected to set out its future approach to AI in a few weeks' time, and its ambitions are high. The Commission reportedly wants the EU to “become a global leader in innovation in the data economy.”
The EU executive is considering a number of sensible ideas such as boosting investments, prioritising education and upskilling, and cooperating with its global partners.
Importantly, the Commission wants to improve public trust in AI to make sure existing EU legislation remains fit for purpose (a lot of it, such as GDPR, already applies to AI but legal clarity would be useful). So far so good.
More problematically, the Commission is entertaining the idea of creating a bureaucratic wall that would significantly delay and block many AI applications from ever benefiting Europeans.
The Commission is considering setting up an elaborate “ex ante conformity assessment” system for testing AI applications before they can be introduced to the EU market. In theory this would only include so-called “high risk” AI applications. However, the definition is so broad that it would include a long list of applications.
How would this EU system work in practice? Imagine that a university or a company invents an AI application that can predict and limit the spread of a deadly virus. This type of public health AI application would be considered “high risk” and thus require a conformity assessment to be carried out in the EU before it can be used in Europe.
The EU doesn’t have any AI assessment centers, and it has a severe IT skills shortage. The AI testing procedure would therefore likely be lengthy, delaying many AI innovations or blocking them from ever being introduced in Europe.
Imagine you run a European startup and you are racing to be the first to launch a novel AI application. While your competitors outside the EU can immediately offer their services abroad, you would have to wait months for approval to serve the EU market. The Commission proposal would, in effect, push European startups to relocate to the US or elsewhere if they want to play in the global AI innovation race.
The Commission wants the testing of AI applications to be handled by the EU Member States. However, given their lack of AI expertise, it is likely that EU Member States would outsource the testing to national companies with AI expertise. This would create obvious conflicts of interest where innovators have to beg their competitors to test and approve their competing AI applications. It also raises questions related to the protection of trade secrets, intellectual property, privacy, and security.
Additionally, the Commission wants AI systems to have been developed using European or equivalent data sets. Non-EU systems would have to undergo an additional lengthy and costly “retraining” in Europe with purely European data sets. The assumption seems to be that excluding global data would somehow make AI applications less biased (even though more than 90% of the world’s population lives outside the EU…)
Clearly, a bureaucratic “AI Fortress Europe” runs counter to the European Commission’s ambition of becoming a global leader in the data economy. It would be the easiest way to ensure that Europe lags behind in the global AI race and becomes a spectator rather than an innovator.
The Commission still has the opportunity to tweak its AI approach to make sure Europe remains innovation-friendly while ensuring proportionate consumer safeguards.