The initial progress report from the “GPT taskforce” of the European Data Protection Board (EDPB) has been released, and it brings bad news for ChatGPT. The EDPB acknowledged OpenAI’s efforts to align its flagship AI model, ChatGPT, with European Union regulations, including the comprehensive General Data Protection Regulation (GDPR), but ultimately found them inadequate.
According to the EDPB document, these findings come as OpenAI has faced temporary stop orders from various European member states throughout 2024. As Cointelegraph reported in January, Italy’s data protection agency found that ChatGPT and OpenAI were still in violation of Italian and EU data privacy laws, despite having been warned and subsequently banned in March 2023.
The EDPB’s report states that OpenAI has not made sufficient efforts since then to ensure that ChatGPT complies with the EU’s laws. The main concern seems to be that ChatGPT has a tendency to produce inaccurate information. The EDPB explains, “Due to the probabilistic nature of the system, the current training approach leads to a model which may also generate biased or fabricated outputs.”
Furthermore, the report expresses the EDPB’s concern that end users are likely to treat ChatGPT’s outputs as factually accurate, regardless of whether they are. It remains unclear how OpenAI could bring ChatGPT into compliance. The GPT-4 model, for instance, was trained on billions of data points and is reported to contain roughly a trillion parameters. It would be impractical for humans to examine the dataset thoroughly enough to verify its accuracy to the standard the GDPR requires.
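To illustrate why verification at that scale is impractical, consider a rough back-of-envelope sketch. Every figure in it is an illustrative assumption (five billion data points, 30 seconds per check, one reviewer working 220 eight-hour days per year), not a number from the EDPB report or from OpenAI:

```python
# Rough back-of-envelope estimate: person-years needed to manually fact-check
# a web-scale training set. Every figure below is an illustrative assumption.

DATA_POINTS = 5_000_000_000              # assumed: "billions of data points"
SECONDS_PER_CHECK = 30                   # assumed: time to verify one item
WORK_SECONDS_PER_YEAR = 8 * 3600 * 220   # assumed: 8-hour days, 220 days/year

person_years = DATA_POINTS * SECONDS_PER_CHECK / WORK_SECONDS_PER_YEAR
print(f"Roughly {person_years:,.0f} person-years of manual review")
# -> Roughly 23,674 person-years
```

Even under these deliberately generous assumptions, the review effort runs into tens of thousands of person-years.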
Unfortunately for OpenAI, the EDPB explicitly stated that “technical impossibility cannot be used as a justification for non-compliance with these requirements.”

