(Reuters) – OpenAI CEO Sam Altman said the company has no plans to leave Europe, reversing a threat made earlier in the week to exit the bloc if complying with forthcoming artificial intelligence rules proved too burdensome.
The European Union (EU) is in the process of developing what could be the first global set of regulations for governing AI. Altman had previously criticized the current draft of the EU AI Act, referring to it as “over-regulating.”
In a tweet, Altman stated, “We are excited to continue to operate here and of course have no plans to leave.”
His initial threat had drawn criticism from Thierry Breton, the EU's industry chief, as well as several other lawmakers.
Over the past week, Altman has been engaging in discussions with top politicians in France, Spain, Poland, Germany, and the United Kingdom, focusing on the future of AI and the progress of ChatGPT.
Altman described his European tour as a “very productive week of conversations in Europe about how to best regulate AI!”
OpenAI had previously faced backlash for not disclosing training data for its latest AI model, GPT-4. The company cited concerns about the competitive landscape and safety implications as reasons for withholding the details.
During the debates surrounding the AI Act draft, EU lawmakers introduced new proposals that would require companies utilizing generative tools like ChatGPT to disclose any copyrighted material used to train their systems.
Dragos Tudorache, a Romanian member of the European Parliament leading the drafting of EU proposals, explained, “These provisions mainly pertain to transparency, which ensures the AI and the company developing it are trustworthy. I see no reason why any company would shy away from transparency.”
CONFLICT WITH REGULATORS
The draft of the AI Act was agreed upon by EU parliamentarians earlier this month. Member states, the European Commission, and Parliament will collaborate to finalize the bill later this year.
ChatGPT, the AI-powered chatbot developed by Microsoft-backed OpenAI, has opened up new possibilities in the field of AI, prompting excitement and alarm alike and putting its maker at odds with regulators.
In response to Altman’s tweet, Dutch MEP Kim van Sparrentak, who has been closely involved in drafting AI rules, emphasized the need for resilience against pressure from tech companies. She stated, “I hope we continue standing firm, and we will ensure these companies have to follow clear obligations on transparency, security, and environmental standards. Voluntary codes of conduct are not the European way.”
OpenAI previously clashed with regulators in March, when Italy's data protection authority, Garante, took ChatGPT offline domestically, accusing OpenAI of violating European privacy rules. The chatbot was later reinstated after the company implemented new privacy measures for users.
German MEP Sergey Lagodinsky, also involved in the AI Act draft, expressed his satisfaction with the decision, stating, “I’m happy to hear that we don’t have to resort to threats and ultimatums. We all face common challenges, but the European Parliament is an ally for AI, not an enemy.”
On Thursday, OpenAI announced that it would provide 10 equal grants from a $1 million fund for experiments aimed at determining how AI software should be governed. Altman referred to these grants as a means to “democratically decide on the behavior of AI systems.”