Technology & Science
China Issues Draft Rules to Police Addiction to Emotional AI Companions
On 27 December 2025, the Cyberspace Administration of China released for public comment a draft regulation that would require AI-companion providers to monitor users’ emotional states, warn them at least every two hours, and actively intervene when dependency or “extreme emotions” are detected.
Focusing Facts
- Draft issued 27 Dec 2025 by the CAC and open for public comment before final adoption.
- Providers would have to display an AI-identity reminder and anti-addiction warning at login and at least every two hours, and intervene if signs of pathological use are detected.
- China’s generative-AI user base hit 515 million in 2025—doubling in six months, according to industry data cited by Reuters.
Context
States have long reacted to new mind-altering or habit-forming technologies: Britain’s Gin Act of 1751 restricted and taxed gin sellers to curb addiction, and China capped minors’ online gaming at three hours a week in 2021. This draft marks the next turn, with governments shifting from censoring content to regulating the affective feedback loops of algorithms. It fits a decades-old Chinese trajectory of embedding surveillance into tech lifecycles (real-name registration in 2011, deep-fake audits in 2023), but now extends sovereignty into citizens’ inner emotional terrain. Whether the rules genuinely safeguard mental health or simply expand state monitoring, they pioneer a model that others (California’s SB-243 hints at this) may copy. On a 100-year horizon, these early statutes could become the emotional-safety equivalent of the 19th-century factory acts, setting foundational rules for a new class of “machine relationships”; or, if enforcement proves impossible, a cautionary footnote about the limits of paternalistic tech governance.
Perspectives
Technology-oriented publications
Technology Org, EconoTimes — Present China’s draft as forward-looking governance that could become a global model and help protect users from psychological harms. By applauding Beijing’s speed and framing the rules as a template for the world, they downplay censorship motives and the regime’s interest in social control, reflecting tech-sector enthusiasm for any ‘innovative’ regulation.
Asian business & general news outlets carrying the Reuters wire
The Business Times, The Times of India, GhanaMMA, Daily Star, Gulf Daily News, Devdiscourse — Report that Beijing is tightening oversight to steer the consumer-AI boom, stressing new obligations to curb addiction and ban content that endangers national security. Heavy reliance on the official draft and Reuters copy leads them to echo the regulator’s talking points with little scrutiny of feasibility or free-speech consequences.
U.S. conservative-leaning media
The New York Sun — Characterizes the plan as the world’s most prescriptive curb on AI companions, highlighting doubts about how such sweeping surveillance and addiction screening could actually work. By foregrounding implementation hurdles and China’s ‘aggressive’ posture, it frames the move as overreach consistent with broader skepticism toward Beijing, possibly overstating the authoritarian angle.