The EU’s AI Act will help to prevent the erosion of civil liberties in areas such as facial recognition, but start-ups in the bloc worry it could put them at a competitive disadvantage © FT montage/Dreamstime

The conjugation of modern technology tends to go: the US innovates, China emulates and Europe regulates. That certainly appears to be the case with artificial intelligence.

After months of intense lobbying, and almost 40 hours of overnight wrangling, exhausted EU policymakers delivered the AI Act last Friday evening and celebrated the world’s most comprehensive legislation governing the transformative technology. It will still take many months before the final wording is agreed and the law is adopted by national legislatures and the European parliament. But snorts of derision quickly followed from across the Atlantic.

Anand Sanwal, chief executive of the New York-based data company CB Insights, wrote that the EU now had more AI regulations than meaningful AI companies. “So a heartfelt congrats to the EU on their landmark AI legislation and continued efforts to remain a nothing-burger market for technology innovation. Bravo!”

European legislators would shoot back that even the big US AI companies, including Google, Microsoft and OpenAI, accept that regulation is essential and that the EU is determined, however imperfectly, to tackle one of the biggest governance challenges of our age. Good luck in passing any comparable federal legislation in the US given the political gridlock in Washington. 

Moreover, the act undoubtedly contains some valuable constraints on the use of AI. The civil libertarian in me applauds the ban on the indiscriminate use of facial-recognition technology, social scoring and predictive policing (even if member states did carve out some exemptions for national security concerns). Citizens of most countries would rebel if forced to provide fingerprints while out shopping every Saturday afternoon. Why should AI-enabled facial-recognition technology be any different? 

For good reason, the act also compels companies to flag whenever a user interacts with a chatbot and to label AI-generated synthetic content to prevent deception. With luck, this may help curb the spread of disinformation, such as that which has recently marred elections in Bangladesh. Citizens will also be able to report abuses of the technology to a newly created European AI Office. Regulators will have the power to fine miscreants up to 7 per cent of their global turnover.

But, as ever, the worry about regulation is whether it will stifle upstream innovation. To some extent, of course, that is the aim. At a time when many AI researchers are calling for a pause in the development of frontier models, the legislators’ intention is to prevent harmful innovation. The trickier question is: will it also inadvertently hamper the good? Unfortunately, the answer is probably yes. 

The French and German governments had already diluted earlier drafts of the act to loosen the testing and transparency requirements on their national AI champions, Mistral and Aleph Alpha. Even so, in a speech in Toulouse this week, France’s President Emmanuel Macron expressed unhappiness with the messy compromise. “We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea,” he said.

Some European start-ups, including Aleph Alpha, have said they could now be at a competitive disadvantage and will have to spend more on lawyers and less on software engineers.

When I asked Nathan Benaich, an investor at London-based VC firm Air Street Capital, whether he would be more or less likely to invest in EU-based AI companies as a result of the act, he replied: “Less likely.” As it is, Air Street has invested 86 per cent of its AI-focused funds in the US, UK and Canada with less than 10 per cent going into EU-based start-ups. “It is undeniable that entrepreneurs are feeling the energy of San Francisco again,” Benaich says.

As well as worrying about the risks of AI, the EU should be far more active in promoting the technology’s productive possibilities, especially in areas such as healthcare, education and energy transition. To that end, the EU should work with universities, industry and investors to create a “Europe stack” that would strengthen its research and development capabilities and build public digital infrastructure, backed by a €10bn digital sovereignty fund, suggests Francesca Bria, former president of Italy’s National Innovation Fund. 

It would be reckless to rely on, and just seek to redirect, the Silicon Valley approach to developing the technology. Europe needs to learn to love AI and invent an alternative future of its own.

john.thornhill@ft.com

Copyright The Financial Times Limited 2024. All rights reserved.