OpenAI Faces New Privacy Complaint: ChatGPT Falsely Described a User as a Child Murderer

OpenAI’s chatbot, ChatGPT, is once again at the center of privacy complaints in Europe after it fabricated a gravely defamatory story about a user. The complaint was filed by the privacy rights advocacy group Noyb on behalf of a Norwegian man who was shocked to discover that ChatGPT had produced a false story claiming he had murdered two of his children and attempted to kill a third.

This incident highlights the risks of ChatGPT generating inaccurate information, sparking major concerns about data protection and privacy responsibilities. Noyb argues that ChatGPT’s actions violate the European Union’s General Data Protection Regulation (GDPR), especially regarding the accuracy and correction of personal data.

“Fabricated Murder Case” Shocks Local Community

The man in question, Arve Hjalmar Holmen, was taken aback when he asked ChatGPT what it knew about him. Instead of providing accurate details, the AI concocted a completely false narrative, claiming that Holmen had been convicted of murdering two of his sons and attempting to kill a third. While the response got some details right, such as the number and gender of his children, the fabricated murder story was deeply unsettling.

Noyb emphasizes that the partial accuracy of certain details—the children’s genders and the family’s hometown—made the fabricated murder claim all the more distressing, since readers could plausibly believe the story referred to the real Holmen.

Privacy and Data Accuracy: A Harsh Test for GDPR

Noyb’s legal team argues that the GDPR clearly mandates that personal data must be accurate and that individuals have the right to request corrections when inaccuracies arise. The brief disclaimer OpenAI displays, stating that the AI may be wrong, does not absolve the company of responsibility, Noyb contends; OpenAI must be held accountable for generating false information about real people.

Under the GDPR, data controllers (in this case, OpenAI) must ensure that the personal data they process is accurate and that individuals have a clear mechanism to correct errors. However, ChatGPT does not currently offer a means to correct false information about a person, which may expose the company to penalties: GDPR violations can result in fines of up to 4% of a company’s global annual turnover.

European Regulators Tightening Legal Accountability

This case has drawn significant attention from European regulators, who are increasingly concerned about AI-generated content. Noyb filed the complaint in Norway, urging the national data protection authority to conduct a thorough investigation of OpenAI’s practices. Other related cases are also ongoing: Poland’s privacy regulator, for instance, has been examining a complaint about ChatGPT since September 2023 without yet issuing a decision.

While some countries have taken a more cautious approach, stating they are not rushing to ban generative AI tools outright, the dangers posed by fabricated information may push regulators to shift their stance. OpenAI has already faced regulatory action in multiple countries, including Italy, where regulators temporarily blocked ChatGPT in 2023 and later fined the company €15 million for failing to fully comply with data protection rules.

AI Fabrication Issues: From Ethics to Legal Challenges

This case once again underscores the risks posed by AI systems that fabricate information, particularly from an ethical and legal standpoint. Noyb’s lawyers argue that AI companies cannot simply rely on disclaimers to evade responsibility. “You cannot just add a disclaimer saying, ‘we might be wrong,’ and then disregard the harm caused,” they said.

As AI tools become more ubiquitous, there is growing concern about their potential to damage individuals’ privacy and reputation. AI companies need to shoulder more responsibility—not just by avoiding the spread of false information, but by ensuring that such information is not fabricated in the first place.

Future Regulation and Responsibility: How Will OpenAI Handle Privacy Challenges?

Noyb’s new complaint represents not only a challenge for OpenAI but also a shared problem facing AI companies worldwide: how to ensure their technology does not produce fabricated information that carries legal and ethical consequences. With European regulators tightening their scrutiny of AI tools, how OpenAI responds will significantly shape its ability to operate compliantly in the global market.

In the future, we may see an increase in privacy complaints and regulatory actions, forcing AI companies to prioritize data protection and user rights in their product design. Balancing innovation with privacy protection is quickly becoming a critical issue that AI companies must address.