According to reports from TechCrunch, Meta's standalone AI app has become a privacy nightmare, with users unknowingly publishing their private conversations with the chatbot to a public "Discover" feed that exposes sensitive personal information including medical queries, financial matters, and even home addresses.
The root of Meta AI's privacy issues lies in a deceptive user interface that fails to clearly communicate when conversations become public. Users interact with the chatbot assuming privacy, but the app's sharing mechanism creates a multi-step process that ultimately publishes their conversations to a global audience without adequate warnings.[1][2] This problematic design presents a "share" button that many users click without understanding the consequences, believing they're saving conversations privately rather than broadcasting them worldwide.[3][1]
When users log into Meta AI through Instagram with a public profile, their AI activity automatically inherits that public setting, further compounding the confusion.[4] Security experts have criticized this as a "dark pattern" that particularly impacts older users who may be less familiar with social media privacy norms.[5] The interface provides no clear indication of current privacy settings during interactions, leaving many to treat the AI as a confidential advisor while unknowingly creating public content accessible to anyone.[1][2]
The Discover feed on Meta AI has become a repository of alarmingly personal information that users never intended to share publicly. Security experts have documented numerous instances of exposed sensitive data, including audio recordings of embarrassing personal questions, requests for help with potentially illegal activities like tax evasion, and character reference letters containing individuals' full names.[1][2] Even more concerning are the instances of exposed home addresses, court case details, and intimate medical queries about conditions ranging from vaginal odors to cancer diagnoses.[3][4]
Some of the most disturbing examples include users sharing details about extramarital affairs, relationship problems, and even drafting suicide notes through the AI assistant, all unwittingly published to the public feed.[4] Rachel Tobac, CEO of SocialProof Security and renowned white hat hacker, has highlighted numerous examples of this inadvertent oversharing, emphasizing that users don't expect their private AI conversations to appear on a public social platform.[5][6] The Mozilla Foundation has responded by demanding Meta shut down the Discover feed until proper privacy protections can be implemented to prevent this widespread exposure of personal information.[4]
The technical architecture of Meta's AI app reveals fundamental privacy flaws beyond just confusing interface design. Despite requiring multiple steps to share content, the system obscures the public nature of posts through what experts call "terrible UX" that tricks users into oversharing.[1][2] The app, downloaded only 6.5 million times since its April 2025 launch, automatically makes AI activity public for users with public Instagram profiles.[3][4]
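The visibility-inheritance problem described above can be illustrated with a minimal sketch. This is a toy model, not Meta's actual code; the names `LinkedProfile`, `default_share_visibility`, and `safer_share_visibility` are invented for the example.

```python
# Illustrative sketch (hypothetical, not Meta's implementation): how using a
# linked social profile's visibility as the default can silently publish
# AI conversations, versus a privacy-by-default alternative.

from dataclasses import dataclass


@dataclass
class LinkedProfile:
    username: str
    is_public: bool  # visibility of the linked Instagram-style account


def default_share_visibility(profile: LinkedProfile) -> str:
    # Flawed pattern: the shared AI post inherits the social account's
    # setting, so a public profile means a public post with no extra consent.
    return "public" if profile.is_public else "private"


def safer_share_visibility(profile: LinkedProfile, user_confirmed_public: bool) -> str:
    # Privacy-by-default alternative: posts stay private unless the user
    # explicitly opts in at share time, regardless of profile settings.
    return "public" if user_confirmed_public else "private"


account = LinkedProfile("example_user", is_public=True)
print(default_share_visibility(account))          # public (surprising to the user)
print(safer_share_visibility(account, False))     # private
```

The contrast is the core of the "dark pattern" critique: the first function makes exposure depend on a setting the user configured in a different context, while the second requires an affirmative choice at the moment of sharing.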
Meta's privacy policy compounds these issues by collecting vast amounts of user data across platforms for AI training purposes. While the company claims private WhatsApp messages aren't directly used, it still harvests metadata, such as information about messaging patterns and frequency, to create detailed behavioral profiles.[5] For most users outside Europe and Brazil (where stringent data protection laws apply), there's no straightforward way to opt out of having personal information used for AI training, creating a troubling global disparity in privacy rights.[6][7]
To safeguard your information when using Meta AI, navigate to Settings & Privacy > Privacy Center, scroll to Privacy Topics, select "AI at Meta," and click "Submit an objection request" under "Your messages."[1] For more comprehensive protection, tap your profile icon, select "Data & Privacy" under "App settings," then "Manage your information," and finally "Make all your prompts visible to only you."[2]
Security experts recommend exercising extreme caution with any sensitive inquiries until Meta implements proper privacy safeguards. A good rule of thumb is to assume nothing shared with AI is confidential.[3] For those concerned about Meta's broader data collection practices, you can disable "Activity Off-Meta" in settings to limit third-party data sharing, though this won't fully prevent internal AI training.[4] European and Brazilian users have additional opt-out rights under stronger regional privacy laws, though critics say the process is deliberately complex, discouraging users from exercising these options.[5][4]