The nonpartisan think tank Brookings this week published a piece decrying the European Union's proposed regulation of open source AI, arguing that it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the bloc's draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable that the company could attempt to deflect responsibility by suing the open source developers whose work its product was built on.
“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”
[…]
In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.
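The episode highlights the enforcement gap behind much of this debate: a license can forbid uses, but openly published weights run on the user's own hardware, beyond any technical control. As a minimal sketch (assuming Hugging Face's diffusers library and the CompVis/stable-diffusion-v1-4 checkpoint; the exact mechanics of bypassing the filter vary across library versions), this is roughly all it takes to generate images with the bundled content filter switched off:

import torch
from diffusers import StableDiffusionPipeline

# Download the openly published v1.4 weights, which Stability AI distributed
# under the CreativeML OpenRAIL-M license (its terms forbid certain uses).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Once the weights are local, nothing technically enforces the license:
# the post-hoc NSFW check is just a pipeline component a user can drop.
# (Attribute assignment works in many diffusers versions; others accept
# safety_checker=None in from_pretrained instead.)
pipe.safety_checker = None

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")

In other words, once the weights are distributed, the restrictions survive only as license text, which is exactly the liability question the Brookings piece raises.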
[…]
“The road to regulation hell is paved with the EU’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”
Instead of seeking to regulate AI technologies broadly, EU regulators should focus on specific applications of AI, Etzioni argues. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation.”
[…]
Source: The EU’s AI Act could have a chilling effect on open source efforts, experts warn | TechCrunch
Edit 14/9/22: Willy Tadema has been discussing this with the NL.gov people and points out that Axel Voss has introduced exemptions into the act:
Last week, the Legal Affairs committee in the European Parliament adopted my opinion on the #AIAct with strong support. 17 votes in favor, one against.
Focusing on 10 key areas within the competence of the JURI committee, we send a strong signal to the lead committees, LIBE and IMCO, while also presenting new ideas for the political debate on #AI.
On the scope (Art. 2), we introduce three new exemptions.
– On research, testing, development to promote innovation in AI,
– On Business to Business (B2B) to avoid regulating non-risky industrial applications,
– On open-source until its commercialization, to support small market players.

We also adjusted the responsibilities of providers (Art. 16) as well as users (Art. 29) as regards their supply chain. In addition, we specified under what circumstances those responsibilities might shift to another actor (Art. 23a), and we tried to integrate general-purpose AI into the AI Act.
The JURI committee also transformed the AI Board into a powerful EU body with its own legal personality and strong involvement of stakeholders, which would help to better coordinate among Member States and keep the AI Act up to date.
As usual, I have to thank Kai Zenner for his tireless work and the great result!