The Italian Data Protection Authority (GPDP) has concluded its investigation into OpenAI’s ChatGPT, imposing a €15 million fine and mandating a six-month information campaign to address transparency and data protection issues.
This development follows a probe launched in March 2023 after significant concerns were raised about the AI chatbot’s compliance with the EU’s General Data Protection Regulation (GDPR).
The GPDP identified several breaches by OpenAI, including:
- OpenAI did not inform the GPDP about a bug in an open-source Redis client library in March 2023 that exposed users’ chat histories and, in some cases, email addresses and payment details of ChatGPT Plus subscribers.
- User data was processed to train the AI model without adequate legal justification, a direct violation of GDPR requirements.
- OpenAI did not provide sufficient information to users regarding how their data was collected and utilized.
- ChatGPT lacked adequate age-verification mechanisms, allowing users under 13 to access the platform and exposing them to potentially inappropriate content, in breach of GDPR provisions aimed at protecting minors.
The GPDP invoked new powers under Article 166(7) of the Italian Privacy Code, underscoring the gravity of these violations.
To address these breaches, the GPDP has ordered OpenAI to run a six-month public information campaign across radio, television, newspapers, and online platforms. The campaign must explain how ChatGPT collects and processes user data, what rights users have under the GDPR, including the rights to object and to rectify or delete personal data, and how users can opt out of having their data used to train generative AI models.
During the investigation, OpenAI established its European headquarters in Ireland, transferring jurisdiction for ongoing GDPR compliance matters to the Irish Data Protection Commission (DPC). The GPDP has forwarded all procedural documents to the DPC under the GDPR’s “one-stop-shop” mechanism.
OpenAI’s cooperative attitude during the investigation influenced the GPDP’s decision to set the fine at €15 million, below the GDPR maximum of €20 million or 4% of annual global turnover. Even so, the enforcement action signals increasing regulatory scrutiny of AI technologies in Europe.
Although this action is an important step in GDPR enforcement and responds to the growing demand for transparency and accountability in AI-driven services, users should ultimately remain mindful of the data they share with platforms like ChatGPT. As a general rule, avoid inputting sensitive information; if you need to work with sensitive datasets, prefer a locally hosted generative AI model, as sketched below.
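To illustrate that last point, here is a minimal sketch of querying a locally hosted model instead of a cloud service. It assumes an OpenAI-compatible local server such as Ollama is running on localhost:11434 and that a model (the name "llama3" below is a placeholder) has already been downloaded; adjust the URL and model name for your own setup.

```python
# Minimal sketch: send a prompt to a locally hosted model so that the
# data never leaves your machine. Assumes an OpenAI-compatible server
# (e.g. Ollama) is listening on localhost:11434 with a model installed.
from openai import OpenAI

# Point the client at the local server; the API key is unused locally
# but the client library requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama3",  # placeholder; use whichever local model you have pulled
    messages=[
        {"role": "user", "content": "Summarise this internal report: ..."},
    ],
)

print(response.choices[0].message.content)
```

The same client code works against any OpenAI-compatible endpoint, so switching between a hosted service and a local model is mostly a matter of changing the base URL.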