China has proposed strict new rules to regulate artificial intelligence, with a strong focus on protecting children and preventing chatbots from giving advice that could lead to self-harm or violence.
The draft regulations were released by the Cyberspace Administration of China (CAC) over the weekend and would apply to all AI products and services operating in the country. The move comes as AI chatbots rapidly grow in popularity, with millions of users turning to them for emotional support, companionship, and daily assistance.
Under the proposed rules, special safeguards would be introduced for children. AI companies would be required to offer personalized controls for minors, limit usage time, and obtain consent from parents or guardians before providing emotional companionship services. These steps aim to reduce the psychological risks linked to prolonged or unsupervised AI interactions.
One of the most significant provisions mandates human intervention. If a chatbot detects conversations related to suicide or self-harm, a human operator must immediately take over. Companies would also be required to alert a guardian or emergency contact, reflecting serious concerns about the mental health impact of AI-generated responses.
The draft rules also prohibit AI systems from generating content that promotes gambling, encourages violence, or harms public safety. In line with China’s broader regulatory framework, AI-generated material that threatens national security, damages national honour, or undermines national unity would also be banned.
Despite tightening oversight, the CAC said it supports the responsible use of AI, especially in areas such as cultural promotion and companionship tools for the elderly. The regulator stressed that AI development should be safe, reliable, and well-controlled, and has invited public feedback on the proposals.
China’s AI sector has expanded rapidly in recent years. Companies such as DeepSeek gained international attention after topping app download charts, while platforms like Z.ai and Minimax, which together serve tens of millions of users, have announced plans for stock market listings.
Globally, concerns about AI safety and its influence on human behaviour are also rising. OpenAI chief Sam Altman has described managing chatbot responses to self-harm as one of the most difficult challenges in AI development. In August, a family in the United States filed a lawsuit against OpenAI, alleging that a chatbot contributed to their teenage son’s death. The case marked the first legal action accusing an AI company of wrongful death.
As governments worldwide grapple with similar risks, China’s proposed regulations signal a tougher and more interventionist approach to managing the social and psychological impact of artificial intelligence.