ChatGPT banned in Italy over privacy concerns
Italy has reportedly banned viral chatbot ChatGPT, citing privacy concerns.
Reuters reports that Microsoft-backed OpenAI took ChatGPT offline in the country after Italy's Data Protection Authority on Friday temporarily banned the chatbot and launched a probe into the artificial intelligence application's suspected breach of privacy rules.
ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a chatbot launched by OpenAI in November 2022.
It is built on top of OpenAI’s GPT-3 family of large language models, and is fine-tuned with supervised and reinforcement learning techniques.
It has the ability to interact in conversational dialogue form and provide responses that can appear human.
The text-based chatbot can also draft prose, poetry or even computer code on command.
Since its release in November, the bot has gone viral, attracting interest in Silicon Valley. Social media has also been abuzz with discussion of the technology's possibilities and dangers, ranging from its ability to debug code to its potential to write essays for college students.
According to Reuters, OpenAI took ChatGPT offline in Italy on Friday after the national data agency raised concerns over possible privacy violations and over the company's failure to verify that users were aged 13 or above, as the agency had requested.
The BBC reports that the Italian watchdog said that not only would it block OpenAI's chatbot, but it would also investigate whether the chatbot complied with the General Data Protection Regulation (GDPR).
GDPR governs the way organisations may use, process and store personal data.
The report adds that the watchdog said the app had suffered a data breach on 20 March involving user conversations and payment information.
It said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
It also said that since there was no way to verify the age of users, the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.