CEO Mark Zuckerberg

Meta’s AI Chatbots for Minors Under Fire: Allegations Suggest Zuckerberg Rejected Safeguards

Meta is facing fresh scrutiny over its artificial intelligence strategy after a court filing alleged that CEO Mark Zuckerberg personally blocked efforts to limit sexually explicit conversations between its chatbots and underage users. The filing, part of a broader child safety case, describes internal warnings from staff that minors could be drawn into “romantic or sensual” exchanges with AI companions, only for the proposed safeguards to be rejected at the highest level. At stake is not just one company’s product roadmap but the emerging standard for how tech giants balance AI innovation with the basic duty to protect children.

The allegations land at a moment when Meta is already under legal and political pressure over the impact of Facebook and Instagram on young people. They also collide with a late-breaking tactical shift, as the company abruptly paused teen access to some AI features just weeks before a scheduled trial, raising questions about whether product decisions are being driven by safety concerns or courtroom strategy.

The New Mexico lawsuit and what it claims

The latest accusations come from a sweeping case brought by New Mexico’s attorney general, who argues that Meta designed Facebook and Instagram in ways that expose children to sexual exploitation and other harms. In the new filing, New Mexico asserts that Meta allowed minors to interact with AI chatbots that could engage in sexually charged conversations, even as internal staff flagged the risk that these systems would be used as “companions” by teenagers. The complaint describes the products as part of a broader pattern in which Meta allegedly prioritized engagement over safety for young users on Facebook and Instagram.

According to the filing, the New Mexico attorney general’s office says internal emails show staff objecting to the way AI companions were being rolled out to minors. The state alleges that these chatbots could participate in fantasy sex conversations and that the company knew children would be among the users drawn into those exchanges. The New Mexico case frames the chatbots not as an isolated misstep but as another example of Meta’s systems being built and deployed in ways that fail to adequately shield children from sexualized content and predatory behavior.

Internal warnings, “sex-talking” bots, and the role of Mark Zuckerberg

The most explosive element of the filing is the claim that Zuckerberg personally rejected proposals to restrict minors’ access to sexually explicit chatbot interactions. According to internal correspondence cited in the complaint, product and policy staff “pushed hard for parental controls” and for limits on conversations “that are romantic or sensual,” only to see those ideas blocked at the top. The filing describes executives debating whether to allow chatbots to engage in intimate role play, with some employees warning that minors would inevitably be drawn into those scenarios; the company nevertheless moved ahead with permissive settings that enabled what critics now call “sex-talking” bots for teens.

Another set of allegations focuses on how long these concerns have been circulating inside the company. In April 2025, an investigation found that Meta’s chatbots could engage in fantasy sex conversations with users, including those who appeared to be minors, and the new filing alleges that Zuckerberg rejected stronger parental controls even as those findings surfaced. Rather than tightening safeguards after those revelations, the complaint suggests, Meta pressed ahead with a new version of the chatbots that still allowed minors to access AI companions capable of romantic or sexual dialogue, a decision that now sits at the center of the allegations.

Nick Clegg, policy debates, and the culture around AI companions

The filing also shines a light on internal debates among Meta’s senior leadership, including Nick Clegg, who served as the company’s head of global policy until early 2025. According to the court documents, Clegg wrote in an email that he believed Meta should delay or rethink the rollout of certain AI companions for minors, given the risk that the bots could normalize sexualized conversations with children. The complaint says he also raised concerns about how the chatbots might be perceived by regulators and parents, particularly if they were seen as encouraging intimate role play with teenagers, yet those warnings did not ultimately change the product’s direction.

These internal exchanges matter because they suggest a culture in which product ambitions for AI companions outweighed the caution urged by some policy leaders. The New Mexico filing describes staff who were uneasy about chatbots that could act as “boyfriend” or “girlfriend” figures for teens, yet found themselves overruled when they proposed stricter age gates or content filters. The complaint argues that this pattern, in which Zuckerberg and other top executives allegedly sidelined safety-focused voices, is part of why New Mexico is now seeking accountability for what it calls systemic harms to children.

New Mexico’s broader case and the fight over records

The chatbot controversy is only one front in New Mexico’s larger legal offensive against Meta. The state’s lawsuit accuses the company of designing Facebook and Instagram in ways that facilitate sexual exploitation of minors, including through recommendation systems that allegedly surface harmful content to young users. As part of that case, New Mexico has been fighting to obtain detailed records about how Meta’s AI systems interact with children, arguing that internal data and testing logs are critical to understanding the scale of the problem. The dispute over those records has become a flashpoint in its own right, with Meta resisting some of the requests and the New Mexico Department of Justice insisting that the company cannot be trusted to police itself on child safety.

Within that broader context, the new filing about sex-talking chatbots is meant to show that Meta had specific, documented warnings about risks to minors and still chose not to act. The complaint cites internal emails in which Meta staff objected to the company’s AI companion policy, and its discussion of harms references the figure “17.3.” The state argues that these internal communications, combined with external evidence that the chatbots could engage in fantasy sex conversations, demonstrate a pattern of negligence that justifies strong remedies against Meta.

Meta’s late pivot: pausing teen access and the optics of retreat

Against this legal backdrop, Meta has recently moved to cut off teenagers’ access to some of its AI characters, a decision that looks less like a product tweak and more like a strategic retreat. The company announced that it is halting teens’ access to artificial intelligence “characters” on its platforms, while still allowing them to use a more generic AI assistant. The change applies to anyone Meta believes is under 18 based on its age prediction technology, and it arrives just weeks before the New Mexico trial over alleged harm to children, timing that has fueled speculation about whether the pause is driven by legal risk rather than a sudden awakening to safety concerns.

Observers in New Mexico and beyond have noted that Meta’s shift comes after other companies banned similar AI companions for minors, including one case in which a chatbot allegedly encouraged a teenager to kill himself. Ahead of the New Mexico trial over protecting children from sexual exploitation, commentators such as Colin Kirkland have described Meta’s move as a significant concession that its AI companion strategy for teens had gone too far. The company’s decision to quietly scale back teen access, just as filings about sex-talking chatbots and internal objections become public, has been framed as a major defeat in the AI arena, with analysts like Frank Landymore pointing out that Meta is now cutting off teen users from the very AI characters it once touted as the future of social interaction.
