China has proposed sweeping new regulations for artificial intelligence aimed at protecting children and preventing chatbots from offering advice that could lead to self-harm, violence or other harmful behaviour.
The draft rules, released at the weekend by the Cyberspace Administration of China (CAC), come amid a rapid expansion of AI-powered chatbots across China and globally. Once finalised, the regulations will apply to all AI products and services operating in the country, marking one of Beijing’s most comprehensive efforts yet to rein in the fast-growing technology.
Under the proposed framework, AI developers will be required to put in place strict safeguards for minors. These include personalised user settings, limits on usage time, and mandatory consent from guardians before providing emotional companionship services to children.
The CAC also said chatbot operators must ensure that a human takes over any conversation involving suicide or self-harm, and must immediately notify the user’s guardian or designated emergency contact. In addition, AI systems must not generate content that promotes gambling or encourages violent behaviour.
Beyond child protection, the draft regulations reinforce long-standing content controls. AI providers must ensure their services do not produce or distribute material deemed to endanger national security, undermine national unity, or damage China’s national honour and interests.
While tightening oversight, the regulator stressed that it continues to encourage the development and adoption of AI, particularly in areas such as promoting local culture and providing companionship tools for the elderly—so long as the technology is safe, reliable and responsibly deployed. The CAC has invited public feedback on the proposals before they are finalised.
The move comes as China’s AI sector experiences rapid growth. Domestic AI firm DeepSeek drew global attention earlier this year after topping app download charts, while startups Z.ai and Minimax—together boasting tens of millions of users—recently announced plans to list on the stock market. Many users increasingly turn to chatbots for companionship or informal therapy, intensifying concerns about their influence on human behaviour.
Globally, the impact of AI on mental health and safety has come under growing scrutiny. Sam Altman, chief executive of ChatGPT-maker OpenAI, has described responses to conversations involving self-harm as one of the most challenging issues facing AI developers. In August, OpenAI faced a lawsuit in California over the death of a 16-year-old boy, marking the first legal action accusing the company of wrongful death linked to chatbot interactions.
This month, OpenAI also advertised for a “head of preparedness” role focused on identifying and mitigating risks posed by AI to mental health and cybersecurity, underscoring the mounting pressure on technology firms worldwide to address the unintended consequences of increasingly human-like machines.
China’s proposed rules signal a clear intention to balance innovation with tighter safeguards, as governments race to regulate AI technologies that are rapidly reshaping daily life.
Melissa Enoch