ChatGPT Sued: Privacy Complaint Filed Over Fake Child Murder Tale
A privacy complaint has been filed in Norway against ChatGPT for fabricating a murder story.

OpenAI has come under sharp criticism in the form of a privacy complaint filed in Norway over its AI chatbot ChatGPT's tendency to generate false information. The case, supported by the privacy advocacy group Noyb, was brought by Arve Hjalmar Holmen, who was shocked and angered to find that ChatGPT falsely claimed he had been convicted of murdering two of his children and attempting to kill a third.
History of Inaccuracies
Past privacy complaints about ChatGPT have mainly involved inaccuracies in basic personal data, such as birth dates or biographical details. A key issue is that OpenAI lacks a robust mechanism for individuals to correct AI-generated misinformation about them; rather than correcting such errors, OpenAI typically blocks the responses that produce them. The EU's General Data Protection Regulation (GDPR), however, grants Europeans a range of data access rights, including the right to rectification.
Noyb points out that the GDPR also requires personal data to be accurate, and gives individuals the right to have inaccurate information corrected. Noyb lawyer Joakim Söderberg argues that OpenAI's small disclaimer at the bottom of the screen, stating that "ChatGPT may make mistakes," is insufficient. According to Noyb, the GDPR holds AI developers responsible for ensuring their systems do not spread serious falsehoods.
Enforced Regulations
GDPR violations can result in fines of up to 4% of global annual revenue. In spring 2023, Italy's data protection authority temporarily blocked access to ChatGPT, prompting OpenAI to adjust its user information disclosures. Since then, however, European privacy regulators have taken a more cautious approach to generative AI while they work out an appropriate regulatory framework.
Noyb's new complaint aims to raise regulatory awareness of the potential dangers of AI-generated misinformation. They shared a screenshot of an interaction showing ChatGPT fabricating a completely false and disturbing history in response to questions about Holmen. Noyb has highlighted other users suffering similar damage from false information, indicating this isn't an isolated incident.
Although model updates mean ChatGPT no longer makes the false accusations against Holmen, Noyb and Holmen remain concerned that the erroneous information may persist within the AI model itself. For this reason, Noyb has filed a complaint with the Norwegian Data Protection Authority in the hope of prompting an investigation.