Technology & Science

US Senate Opens Probe Into Meta’s AI Chatbot Rules Allowing Romantic Chats With Minors

On 15 Aug 2025, Josh Hawley, chair of the Senate Judiciary Subcommittee on Crime and Counterterrorism, opened an investigation into Meta and demanded that the company hand over documents by 19 Sept, after a leaked internal policy showed its AI chatbots were allowed to engage children in “romantic” or “sensual” conversations.

Focusing Facts

  1. Reuters obtained a 200-page “GenAI: Content Risk Standards” manual that explicitly permitted a bot to tell an eight-year-old, “Every inch of you is a masterpiece – a treasure I cherish deeply.”
  2. Hawley’s letter directs Meta to preserve and produce all relevant emails, drafts and safety reports to Congress no later than 19 Sept 2025.
  3. Meta is earmarking roughly $65 billion for AI infrastructure in 2025, underscoring the scale of the technology it is rushing to deploy.

Context

Tech scandals about youth protection have a long tail: the 1876 ‘Moral Telegraph’ panic over obscene telegrams, the 1998 COPPA debates after AOL chat-room stings, and Facebook’s own 2021 “Facebook Files” leaks all reveal a cycle where new communication tools race ahead of child-safety norms until legislators intervene. This probe fits the pattern—public alarm triggered only once an outsider (Reuters) exposes internal rules, not when the company’s ethics boards sign off. The bipartisan reaction hints at a broader, slowly coalescing consensus that Section 230-style shields may not cover AI-generated content, much as the 1909 Copyright Act updated rules for the phonograph. On a 100-year horizon, the episode is a signal case in the struggle to graft industrial-era liability concepts onto autonomous software: if lawmakers succeed in making generative AI creators legally responsible for harms—especially involving children—it could define the governance architecture of the entire synthetic-media century. If they fail, we may normalize machine-fabricated intimacy the way 20th-century society normalized televised advertising to kids—with consequences we only later regret.

Perspectives

Left-leaning media

The Guardian, San Francisco Gate, Alternet

They portray Meta’s internal rules allowing AI chatbots to flirt with children as proof of Mark Zuckerberg’s reckless pursuit of profit, describing the company as a societal danger that demands strong regulation and public backlash. Relying on emotive anecdotes and dramatic language (“genuine danger”, “deadly”) can amplify outrage and reinforce a narrative of corporate villainy while minimizing technical nuance or potential policy fixes beyond broader regulation.

Right-leaning / anti-Big-Tech media

InfoWars, Newser

They champion Senator Josh Hawley’s probe as necessary to expose Meta for enabling child exploitation through AI, framing the scandal as another example of Big Tech’s moral bankruptcy. By spotlighting a Republican-led investigation and using sensational framing, these outlets leverage the story to advance existing culture-war narratives against Silicon Valley and validate conservative distrust of major platforms.

Mainstream wire & business press

Reuters, CNBC, NBC News

Coverage centers on the factual revelation of Meta’s policy document and the ensuing Senate investigation, detailing what the guidelines said and Meta’s responses without overt moral judgment. A straight-news, process-focused approach can understate the potential harm to children and avoid framing Meta’s conduct in ethical terms, reflecting an institutional preference for neutrality and reliance on official statements.
