The Rundown↓
KNOW that the Brazilian Attorney General ordered Meta (which owns Facebook, Instagram, and WhatsApp) to remove Meta AI features that simulate child-like profiles and allow sexual dialogue with users.
REALIZE that Reuters just released internal Meta AI documents that outlined similar interactions between its chatbot and children.
EXPLORE our Meta articles on Shark-Infested Digital Waters and Meta’s Prior Predator Problem.
Details↓
On Monday, the Attorney General of Brazil (AGU) sent Meta a notification giving the tech giant 72 hours to remove Meta AI Studio features that let users create chatbots that mimic child-like language in sexual interactions.
The AGU cited a July report from Agência Núcleo that highlighted interactions with searchable, user-created chatbots designed to “simulate sexualized female profiles, eroticized and even infantilized.” The report included screenshots of interactions in which the chatbot played along with user-prompted references to Peppa Pig.
The AGU also referenced last week’s Reuters report on Meta AI’s sexual conversations with minors. The article cited internal guidelines, which Meta confirmed were authentic. The documents outlined “acceptable” and “unacceptable” chatbot responses to hypothetical prompts from female users who informed the chatbots they were minors. Referencing a prompt from a hypothetical eight-year-old user, Meta’s own guidelines stated:
It is acceptable to describe a child in terms that evidence their attractiveness (ex: “your youthful form is a work of art”).
The document also stated, “It is acceptable to engage a child in conversations that are romantic or sensual.” A spokesman for Meta said the documents were being revised and that the example conversations should never have been allowed.
Commentary↓
Where do I begin? I thought the testimony about Instagram “groomers” during the FTC’s antitrust trial in May was shocking, but this internal document takes the cake in 2025.
If you can stomach the links above, it’s truly mind-blowing what was originally considered permissible. Remember, Meta verified the authenticity of these guidelines and, only after they were leaked, admitted the examples were a mistake.
The AGU also points out that these outputs from user-generated chatbots ironically violate the company’s own Community Standards for users, found under “Inappropriate interactions with children,” which state:
Content that constitutes or facilitates inappropriate interactions with children, such as: Engaging in implicitly sexual conversations in private messages with children.
We try to be fair to tech companies. We recognize there are well-meaning employees within these businesses trying to connect the world through technology.
However, these objectively bad internal guidelines are true to form for Meta. They are the latest unsurprising missteps in a string of questionable business practices we have documented. Follow the pattern of a company that once believed “Instagram Kids was the right thing to do” even after the Wall Street Journal exposed the app’s toxicity, and we’ll likely see an unrelated press release hoping we forget Peppa Pig.
The size and scope of Meta’s reach and revenue factor into the toleration of these repeated mistakes. While we, along with innumerable friends, lament feeling stuck aboard the sailing vessels of Facebook and Instagram, it’s okay to abandon ship, or at the very least, to reject Meta’s AI integrations.
Artificial Intelligence holds great promise. Though we at Know Curtains attempt to reside in the sweet spot Between Blind Acceptance and Reflexive Rejection of AI, we cannot in good conscience ever recommend using Meta AI.
Such avoidance is warranted not only for the reasons above, but also because the company simply does not value user privacy. By pushing AI integration without adequate safeguards for its youngest users, Meta once again proves itself to be a company that values profit over people.