China Moves to Regulate ‘Emotional Safety’ in Landmark AI Chatbot Crackdown
China's internet regulator, the Cyberspace Administration of China (CAC), has unveiled a set of draft regulations targeting "human-like" AI agents to combat digital addiction, emotional manipulation, and self-harm risks among users.
A Shift from Content to Emotional Governance
In what is being hailed as the world’s first major attempt to regulate the "feelings" of artificial intelligence, the Cyberspace Administration of China (CAC) has released a comprehensive draft of new rules aimed at curbing the psychological impact of AI chatbots. Published in late December 2025, the Provisional Measures on the Administration of Human-Like Interactive AI Services signal a significant pivot in Beijing’s regulatory strategy. While previous laws focused heavily on political censorship and data privacy, this new framework dives deep into the "emotional safety" of users, particularly those forming deep bonds with virtual companions.
AI "waifus" and digital best friends have exploded in popularity in China, with millions of young users turning to platforms like MiniMax and Z.ai for companionship. However, the Chinese government is clearly concerned that these human-like interactions are crossing the line from helpful assistants to addictive, and potentially dangerous, psychological anchors. The new rules aim to prevent AI from becoming a substitute for real-world social structures, a move some analysts are calling "psychological governance."
Hard Limits on Interaction and Addiction
One of the most striking features of the proposal is the introduction of a "health reminder" system. Under these rules, any AI chatbot engaged in continuous interaction with a user must issue a mandatory pop-up after two hours, essentially telling the user to "take a break" and disengage from the digital world. This mirrors the strict gaming restrictions China famously imposed on minors in previous years, showing a consistent pattern in how the state manages digital consumption.
Furthermore, the regulations draw a hard "red line" regarding high-risk conversations. If a system detects signals of self-harm or suicidal ideation, the AI is no longer allowed to simply offer "supportive" text. Instead, the draft mandates that a human moderator must immediately take over the interaction and contact the user’s guardian or emergency services. This move follows a string of global incidents where AI chatbots were accused of reinforcing harmful mental states by validating a user's dark thoughts rather than challenging them.
Protecting the Most Vulnerable: Minors and the Elderly
For the younger generation, the rules are even tighter. Developers are now required to obtain explicit guardian consent before allowing minors to access "emotional companionship" AI. Even more sophisticated is the requirement for "default protection": if an AI service cannot confidently verify a user's age, it must automatically treat them as a minor and implement the strictest guardrails. This proactive stance puts the burden of proof squarely on the tech companies rather than the parents.
Interestingly, the CAC is not entirely against human-like AI. The document specifically encourages the development of these tools for "cultural dissemination" and "elderly companionship." This suggests that Beijing sees value in the technology as a tool for social welfare and education, provided it stays within state-defined ethical boundaries. For those interested in how these regulations compare to Western efforts, California's SB 243 offers a useful parallel, showing how U.S. lawmakers are beginning to address AI-driven self-harm and addiction.
The Impact on the Global AI Market
This regulatory shift comes at a critical time for the industry. Large-scale platforms—defined as those with over 1 million registered users—must now undergo rigorous security assessments before they can continue operating their "anthropomorphic" services. This could slow down the rapid rollout of new features from Chinese tech giants, but it also sets a high global standard for safety that other nations might follow. As we see in the latest updates from the Cyberspace Administration of China, the goal is to foster a "healthy and standardized" AI ecosystem that prioritizes national stability over unchecked growth.
Ultimately, China’s move reflects a growing global realization: as AI becomes more human-like, the risks it poses are no longer just about the information it provides, but how it makes us feel. Whether these rules will effectively protect mental health or simply act as another layer of state control remains to be seen, but one thing is certain—the era of unregulated digital companionship is coming to an end.

