European companies believe the “absolute lie” that the EU AI Act is killing innovation, Carme Artigas, co-chair of the United Nations advisory board on artificial intelligence, has warned.
“We are losing the battle of the narrative,” Artigas said last week at the Europe Startup Nations Alliance forum.
As Spain’s AI minister, Artigas led negotiations on the AI Act in the EU Council. She rejected accusations that the act has over-regulated digital technologies and is pushing companies to set up abroad.
That narrative “is not innocent at all”, she said. It has been “promoted by the US – and our start-ups are buying that narrative.”
“What is the end game of this narrative? To disincentivise investment in Europe and make our start-ups cheaper to buy,” said Artigas.
In his report on EU competitiveness, Mario Draghi says the ambitions of the AI Act are “commendable”, but warns of overlaps and possible inconsistencies with the General Data Protection Regulation (GDPR).
This creates a risk of “European companies being excluded from early AI innovations because of uncertainty of regulatory frameworks as well as higher burdens for EU researchers and innovators to develop homegrown AI”, the report says.
But for Artigas, the main objective of the legislation is “giving certainty to the citizens to enable massive adoption.” As things stand, “The reality is nobody is using AI mainstream, no single important industry.”
Lucilla Sioli, head of the European Commission’s AI Office, which was set up to enforce the AI Act and support innovation, agreed that companies need certainty that consumers will trust products and services using AI. “You need the regulation to create trust, and that trust will stimulate innovation,” she told the forum.
In 2023, only 8% of EU companies used AI technologies. Sioli wants this figure to rise to three quarters.
She claimed the AI Act, which entered into force on 1 August, is less complicated than it appears and mainly consists of self-assessment.
The AI Act is the world’s first binding law of its kind, regulating AI systems according to their level of risk. Most systems face no obligations, while those deemed high-risk must comply with a range of requirements, including risk mitigation systems and high-quality data sets. Systems with an “unacceptable” level of risk, such as those that allow social scoring, are banned outright.
Even for high-risk applications, the requirements are not that onerous, Sioli said. “Mostly [companies] have to document what they are doing, which is what I think any normal, serious data scientist developing an artificial intelligence application in a high-risk space would actually do.”
The Commission needs “to really explain these facts, because otherwise the impression is the AI Act is another GDPR, and in reality, it affects only a really limited number of companies, and the implementation and the compliance required for the AI Act are not too complicated,” said Sioli.
Kernel of truth
Holger Hoos, a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe, agreed it is in the interests of US tech companies to promote a narrative that Europe is stifling innovation in AI.
“They know Europe has lots of talent, and every so often they buy into companies using this talent, Mistral being the best example,” he told Science|Business.
Nevertheless, there is a “kernel of truth” to this narrative. “We’re in the early phases of implementation of the AI Act, and I believe there are reasons to be concerned that there is a really negative impact on certain parts of the AI ecosystem,” Hoos said.
[…]
Source: EU is ‘losing the narrative battle’ over AI Act, says UN adviser | Science|Business
Yes, the negative impact is towards people who want to do risky stuff with AI. Which is a Good Thing ™
Robin Edgar