Access to the AI chatbot, ChatGPT, has been restored in Italy.
The AI was temporarily banned at the beginning of April by the Italian data protection authority, Garante, citing privacy concerns.
OpenAI, the Microsoft-backed maker of ChatGPT, said in a letter to Garante that it had "addressed or clarified" the issues that were raised.
we’re excited chatgpt is available in 🇮🇹 again!— Sam Altman (@sama) April 28, 2023
Last month, Italy became the first Western country to block access to the popular AI chatbot after Garante launched an investigation into suspected breaches of privacy rules.
In response, OpenAI has made its privacy policy accessible to ChatGPT users before registration and introduced a tool to verify the age of users in Italy.
Furthermore, the EU registration process now includes a new form that lets users exercise their right to object to the use of their personal data to train its models.
#OpenAI sent a letter to the Italian SA describing the measures it implemented in order to comply with the order issued by the #GarantePrivacy on 11 April. Based on these improvements, the US-based company reinstated access to #ChatGPT for Italian users https://t.co/xeZahEdsVj pic.twitter.com/hO9SZGah34— Garante Privacy (@GPDP_IT) April 28, 2023
Accusations
Garante had accused OpenAI of failing to check the age of ChatGPT users, who are supposed to be 13 or older.
As a result, OpenAI said it would offer a tool to verify users' ages in Italy upon sign-up.
In response to the changes, Garante told the BBC that it "welcomed the measures OpenAI implemented" but called for further compliance, including the rollout of an age-verification system.
It has also specifically requested that OpenAI plan and conduct an information campaign to inform Italians of their right to opt out of the processing of their personal data for training its algorithms.
#ChatGPT The Italian SA imposed an immediate temporary limitation on the processing of Italian users’ data by #OpenAI, the US-based company developing and managing the platform. An inquiry into the facts of the case was initiated as well https://t.co/1ipz68unnI #GarantePrivacy pic.twitter.com/L8aWqOkmLD— Garante Privacy (@GPDP_IT) March 31, 2023
What is happening in other countries?
The UK and the EU are taking different approaches to regulating artificial intelligence (AI): the UK is focusing on principles, while the EU is proposing restrictive laws.
The UK proposals outline some key principles for companies to follow when using AI in their products, including safety, transparency, fairness, accountability and contestability.
The EU, which is proposing groundbreaking legislation, will heavily restrict the use of AI in critical infrastructure, education, law enforcement and the judicial system.
At home, no AI-specific legislation has yet been introduced in Australia, and AI governance so far relies on guidance and existing law such as the Privacy Act.
This has led to a number of calls for an AI-specific legislative framework to address the challenges and issues raised by the new technology.
Likewise, the US has no formal rules to bring oversight to AI technology, although its National Institute of Standards and Technology (NIST) has put out a national framework offering guidance on managing risks and potential harms.
China, where ChatGPT is not available, has introduced regulations on deepfakes, while several large tech companies in the country are developing alternatives.