Technology & Science

UK Ofcom Opens Formal Probe into X’s Grok AI Deepfake Abuse

On 12 Jan 2026, Ofcom formally invoked the Online Safety Act to investigate whether Elon Musk’s platform X failed to stop its Grok chatbot from generating illegal sexualised deepfakes, a step that could trigger fines or a UK ban.

Focusing Facts

  1. Maximum penalty: 10% of X’s global turnover or £18 million, whichever is greater, under Section 122 of the Online Safety Act 2023.
  2. Ofcom demanded a risk-mitigation plan from X on 5 Jan 2026 and set a 9 Jan deadline, which the company met before the probe was announced.
  3. Indonesia and Malaysia blocked access to Grok on 11 Jan 2026, becoming the first countries to do so over the same issue.

Context

Governments wrestling with disruptive expression technologies is hardly new: in 1857 Britain passed the Obscene Publications Act to police shocking new mass-printed images, and in 1938 the US FCC scrutinised radio dramatist Orson Welles after the ‘War of the Worlds’ panic. Each time a medium suddenly expanded human ability to simulate reality—cheap photography in the 19th century, radio in the 20th, file-sharing MP3s circa 1999—regulators reacted only after a scandal exposed harms they had not foreseen.

The Grok investigation sits in that lineage: reactive rather than proactive, and testing whether existing speech laws can stretch to AI-fabricated “pseudo-photographs.” Long-term, the case signals a drift toward function-level oversight (regulating the generation tool itself, not just user posts), hinting at future licensing or technical standards for generative models much like safety certifications for automobiles or pharmaceuticals.

Whether Ofcom fines, bans, or backs down, the precedent matters: it frames synthetic media as subject to traditional liability, challenging techno-libertarian claims that code is neutral. On a 100-year horizon, the outcome will feed into the still-forming global norm over who bears ultimate duty—the user, the platform, or the model creator—when AI collapses the boundary between imagination and evidentiary image.

Perspectives

Mainstream UK & international news outlets

They frame Ofcom’s probe as a necessary enforcement of child-protection laws and signal broad political backing for heavy penalties, or even for blocking X, if Grok keeps producing sexual deepfakes. Heavy reliance on statements from ministers and regulators risks echoing official talking points and downplaying the free-speech or due-process concerns raised elsewhere.

Tech and progressive outlets critical of Musk

Coverage centres on Musk’s alleged negligence and profiteering, portraying X as recklessly enabling abuse and urging governments to act faster or more harshly than they already are. Strongly worded attacks on Musk’s character and motives (e.g., calling him “ever irresponsible and juvenile”) suggest an ideological hostility that may colour the assessment of facts.

Free-speech–oriented commentators and Musk supporters

They argue that UK officials are exploiting the scandal as a pretext for censorship, echoing Musk’s claim that threats to ban X are “fascist” attempts to stifle expression. By focusing on civil-liberties rhetoric, they tend to minimise the documented scale of child-abuse imagery and intimate-image abuse reported by regulators, potentially underplaying genuine safety harms.
