
Meta has announced it will soon begin using public data from adult users in the European Union — including posts, comments, and AI interactions — to train its generative AI models, raising concerns about the boundaries of consent and user awareness across its major platforms.
Starting this week, EU-based users of Meta platforms will be notified about the change and offered a way to object to the use of their data. The training dataset will include public content from Facebook and Instagram, as well as user interactions with Meta AI, which was launched across Europe last month. Meta explicitly states that private messages — including those on WhatsApp and Messenger — will not be used, and data from users under 18 is excluded.
This marks the first time Meta has tapped into European user content for training its AI systems, following a delay last year while awaiting regulatory clarification. The firm now says it is proceeding with confidence, citing favorable feedback from the European Data Protection Board (EDPB) and ongoing cooperation with the Irish Data Protection Commission (IDPC), its primary EU regulator.
Meta Platforms, Inc. — which owns Facebook, Instagram, WhatsApp, and Messenger — is one of the largest data-centric tech firms globally. With the rollout of Meta AI in March 2025, users across Europe gained access to generative AI features directly within their social and messaging apps. The company now wants its models to be better tuned to regional languages, cultural nuances, and local forms of expression, which it argues requires training on content created by European users themselves.
The training dataset will include:
- Public posts and comments shared by adults on Facebook and Instagram
- User queries and interactions with Meta AI chat features embedded in Facebook, Instagram, Messenger, and WhatsApp
Although WhatsApp messages remain off-limits for AI training, the AI prompts and interactions users submit within the WhatsApp interface may still be processed for model improvement — something not all users may realize. Meta’s framing of this process emphasizes “transparency,” yet data use is enabled by default and users must actively object. Privacy advocates continue to criticize this opt-out model under the General Data Protection Regulation (GDPR), which typically requires explicit, informed consent for data processing at this scale.
While Meta claims its practices are aligned with industry norms — citing Google and OpenAI as having similarly used European data for AI training — it also positions itself as more transparent in providing opt-out tools. Users will begin receiving notifications via email and in-app banners with a link to an objection form. Meta has committed to honoring both new and previously submitted objections.
Privacy-conscious users in the EU should review the opt-out form as soon as the notification appears and exercise caution when interacting with Meta AI, particularly when sharing personal or sensitive information, even if those interactions are not classified as “private.”
As generative AI features become more deeply embedded in communication platforms, the risks of silent data harvesting grow. This latest move from Meta underscores the tension between building “regionally attuned” AI and respecting user agency over personal data — especially in regions like Europe, where digital privacy remains a fundamental right.