The People’s Republic of China (PRC) has proposed strict new rules to limit and shape the use of AI chatbots. The draft proposal, released this week, applies to humanlike AIs, or “anthropomorphic interactive services,” as translated by Google.
It defines humanlike AIs as systems that simulate “human personality traits, thinking patterns, and communication styles,” and engage in “emotional interaction with humans through text, images, audio, video, etc.”
One notable clause in the wide-ranging document is a limit on long chats with a humanlike AI. If someone “uses the anthropomorphic interactive service continuously for more than 2 hours, the provider shall dynamically remind the user to pause the use of the service through pop-up windows or other means,” it says.
While some people may talk to an AI for hours at their job, others do so for companionship, and the PRC has specific ideas about when AI relationships are acceptable. For example, it encourages using them to keep the elderly company; China has one of the fastest-aging populations in the world, according to the World Health Organization. Tech companies must require elderly users to provide an emergency contact during registration, the proposal says.
However, AIs that provide emotional companionship to minors will be subject to strict guidelines. They require explicit guardian consent, must offer parental controls, and must provide parents a summary of their kids' use of the services.
US AI firms are starting to implement similar measures following a string of teen suicides allegedly encouraged by chatbots. ChatGPT now offers parental controls and is working on an age-verification system, another requirement for tech companies listed in the PRC's document. Character.AI has banned open-ended chats for users under 18.
The PRC's proposal would prohibit AI systems from "encouraging, glorifying, or implying suicide or self-harm." It seeks to protect "users' personal dignity and mental health" by preventing "verbal violence or emotional manipulation."
While not encouraging violence and self-harm might seem like table stakes for any publicly available consumer product, AI companies have struggled to rein in these behaviors in their chatbots. When companies like OpenAI and Anthropic release new models, they measure rates of hallucination, deception, bias, and willingness to discuss dangerous topics.
The PRC document does not reference any American-owned AI systems, likely because ChatGPT, Google Gemini, Anthropic’s Claude, and others are not officially available in China.
The proposed rules seek to “actively apply” anthropomorphic chatbots for “cultural dissemination” and promoting “core socialist values.” In that vein, the tech will be prohibited from “generating or disseminating content that endangers national security, damages national honor and interests, undermines national unity, engages in illegal religious activities, or spreads rumors to disrupt economic and social order,” the document says.
The PRC says it will monitor how its citizens and companies use anthropomorphic systems nationwide. If tech companies operating in China do not follow these rules, the government will suspend their services. There is no implementation date yet; the document is open to public comment until Jan. 25, 2026.
China has already experimented with limiting web access for kids. The internet itself is tightly controlled in the country; Chinese chatbot DeepSeek, for example, produces Communist Party propaganda on controversial topics.
About Our Expert
Emily Forlini
Senior Reporter
Experience
As a news and features writer at PCMag, I cover the biggest tech trends that shape the way we live and work. I specialize in on-the-ground reporting, uncovering stories from the people who are at the center of change—whether that’s the CEO of a high-valued startup or an everyday person taking on Big Tech. I also cover daily tech news and breaking stories, contextualizing them so you get the full picture.
I came to journalism from a previous career working in Big Tech on the West Coast. That experience gave me an up-close view of how software works and how business strategies shift over time. Now that I have my master’s in journalism from Northwestern University, I couple my insider knowledge and reporting chops to help answer the big question: Where is this all going?
