Brussels (AFP) - The EU and Google want voluntary rules on AI in place before a new law regulating the rapidly advancing technology comes into force in the bloc, EU industry commissioner Thierry Breton said Wednesday.
“We agreed that we cannot afford to wait for the AI law to come into force and to work together with all AI developers to introduce a voluntary pact,” Breton told AFP after holding talks with Google CEO Sundar Pichai in Brussels.
Although the European Union's executive arm first proposed a law to regulate AI in 2021, the issue has taken on greater urgency since ChatGPT, a chatbot created by US-based OpenAI, burst onto the scene.
The European Parliament is due to back the draft law next month before negotiations formally begin with the EU’s 27 member states to agree on a final version.
The EU is racing to be the first to regulate the risks that come with AI’s deployment.
Breton said that even if the EU adopted the law by the end of the year, it would start to apply “at the earliest by the end of 2025”.
The list of concerns over AI is ever-growing, from disinformation to copyright over images, sound and text.
Breton added that he wanted to engage a “large number of players, whether European or non-European” to discuss the voluntary rules.
“We already see some general rules. Many things could be implemented without going through the law,” he said, giving examples including ensuring that AI-generated images contain labels saying they were produced by AI.
The EU parliament's text includes bans on biometric surveillance, emotion recognition and AI predictive policing systems.
It also seeks to put generative AI systems such as ChatGPT and Midjourney in a category requiring special transparency measures, such as notifications to users that the output was made by a machine, not a human.
Some tech firms have welcomed regulation.
Last week, OpenAI’s CEO Sam Altman testified before a US Senate panel and called on Congress to impose new rules on big tech to mitigate the dangers that can arise from AI.
The G7 group of nations last week also announced they would launch discussions this year on “responsible” use of the technology with a working group to tackle issues from copyright to disinformation.
European Commission Vice President Margrethe Vestager said on Tuesday that officials from the United States and the EU would discuss the issue at an EU-US Trade and Technology Council (TTC) meeting in Sweden next week.
“We can talk about this within the TTC in a way that will help the G7 process to be as concrete as possible,” she told reporters.