In the first comments on the app from a senior Brussels official, EU industry chief Thierry Breton said newly proposed artificial intelligence rules will aim to address concerns about the risks posed by the ChatGPT chatbot and AI technology.
Just two months after its launch, ChatGPT — which can generate articles, essays, jokes, and even poetry in response to prompts — was rated as the fastest-growing consumer app in history.
Some experts fear that the systems underpinning such apps could be misused for plagiarism, fraud and the spread of misinformation, even as advocates of artificial intelligence hail the technology as a leap forward.
Mr Breton said the risks posed by ChatGPT – the brainchild of OpenAI, a private company backed by Microsoft – and AI systems underscored the urgent need for the rules he proposed last year to set the global standard for the technology. The rules are currently being discussed in Brussels.
“As ChatGPT has shown, AI solutions can offer great opportunities for businesses and citizens, but they also pose risks. That’s why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” he told Reuters in written comments.
Microsoft declined to comment on Mr Breton’s remarks. OpenAI – whose app uses a technology called generative AI – did not immediately respond to a request for comment.
OpenAI has stated on its website that it aims to produce artificial intelligence that “benefits all of humanity” while attempting to build safe and useful AI.
Under the EU’s draft rules, ChatGPT is considered a general-purpose AI system, one that can be put to multiple uses, including high-risk ones such as selecting candidates for jobs and checking credit scores.
Mr Breton wants OpenAI to work closely with downstream developers of high-risk AI systems so they can comply with the proposed AI law.

“The mere fact that generative AI is newly included in the definition shows the speed at which the technology is evolving and that regulators are struggling to keep up with this pace,” said a partner at a US law firm.
Executives at several companies involved in developing artificial intelligence said they fear their technology could be placed in the “high risk” AI category, which would lead to stricter compliance requirements and higher costs.
A survey by industry association AppliedAI found that 51 percent of respondents expect their AI development activities to slow down as a result of the AI law.
Effective AI regulations should focus on the highest-risk applications, Microsoft President Brad Smith wrote in a blog post on Wednesday.
“There are days when I’m optimistic and moments when I’m pessimistic about how humanity will use AI,” he said.
Mr Breton said the European Commission is working closely with the EU Council and European Parliament to further clarify the rules in the AI Act for general-purpose AI systems.
“People need to be informed that they are dealing with a chatbot and not a human,” he said.