
Meta is facing a major legal showdown in the European Union after the privacy advocacy group noyb issued a formal cease and desist letter, warning of imminent litigation over the company’s controversial plan to use Facebook and Instagram user data for AI training starting May 27.
The group, led by privacy activist Max Schrems, argues that Meta's reliance on “legitimate interest” as a legal basis sidesteps fundamental GDPR requirements, opening the door to collective redress actions that could cost the tech giant billions.
The company’s plan, first reported last month, involves collecting public posts, comments, and interactions with Meta AI chat features from EU-based users to train generative AI models. While Meta has promised not to use private messages or data from users under 18, its opt-out mechanism and reliance on implicit consent have drawn criticism from privacy experts and digital rights advocates.
The letter marks the first formal step toward what could become one of the largest pan-European data protection cases under the EU’s new Collective Redress Directive. This legislation empowers qualified entities like noyb to seek injunctions and damages across EU jurisdictions on behalf of consumers. Schrems’ organization is demanding that Meta immediately halt the ingestion of EU user data into its AI training pipelines, warning that failure to comply by May 21 will trigger legal action.
At the heart of the dispute is Meta’s decision to bypass opt-in consent, instead invoking Article 6(1)(f) GDPR, the “legitimate interest” clause, to justify harvesting public posts, comments, and AI interactions from users across the EU. Meta has excluded minors’ data and private messaging content, but intends to include all other public content generated by adult users. Notifications offering an opt-out are being sent to users, but privacy groups argue that this flips the GDPR’s consent model on its head.
AI training intrinsically incompatible with GDPR
noyb’s letter lays out 11 legal arguments challenging Meta’s interpretation of the GDPR. Chief among them is the assertion that users could not have reasonably expected their historical social media activity, in some cases dating back to 2004, to be repurposed for AI training. The organization also cites previous rulings by the Court of Justice of the European Union (CJEU), such as Meta Platforms v. Bundeskartellamt and Google Spain, which emphasized the limits of commercial interests in overriding data protection rights.
noyb argues that once user data is ingested into openly released AI models like Meta’s Llama, it becomes impossible to honor GDPR rights such as the rights to erasure, correction, or access. This, they contend, makes any reliance on legitimate interest legally untenable. They further criticize Meta’s handling of objection rights under Article 21 GDPR, alleging that the company restricts objections to a narrow window before training begins and fails to account for users without active accounts or for data linked indirectly through third parties.
Complicating matters further, noyb raises concerns about compliance with the EU’s Digital Markets Act (DMA), which bars gatekeepers like Meta from cross-using personal data across services without consent. Meta’s intent to train a general-purpose AI system using data pooled from Facebook and Instagram, according to noyb, may violate these provisions as well.
If Meta refuses to comply, noyb could file for an EU-wide injunction, forcing the company not only to halt the training process but potentially to delete AI models trained on improperly acquired data. Since AI models cannot be easily rolled back, especially once released openly, the legal and financial implications are enormous. Schrems estimates that non-material damages alone could total €200 billion if each of Meta’s roughly 400 million EU users claimed just €500.
National consumer groups, such as Germany’s VZ NRW, have already begun pursuing legal action independently, indicating a growing EU-wide backlash. Data Protection Authorities (DPAs), however, appear largely passive, with some merely informing users of their right to opt out, rather than enforcing stricter measures against Meta’s practices.
“This fight is essentially about whether to ask people for consent or simply take their data without it,” said Schrems, underscoring the broader privacy stakes. “Meta is risking monumental legal and financial consequences just to avoid asking users for a simple yes.”
EU users should immediately review Meta’s opt-out notification and submit objections if they do not wish their data to be used for AI training. This objection can typically be filed via a form linked in Meta’s email and in-app messages. For those without active accounts but with prior data on the platforms, options are more limited, but noyb has published guidance on how such users can assert their rights.