Chinese Comprehensive AI Rules, Draft Regulation, Dec 2025

January 1, 2026

In late December 2025, the Cyberspace Administration of China (CAC) published a draft regulation titled "Interim Measures for the Management of Anthropomorphic Interactive Services Using Artificial Intelligence." The proposal introduces a comprehensive set of rules—spanning approximately 29 articles—to govern AI systems that simulate human traits and engage users emotionally, including companion chatbots and humanoid robots. The rules aim to promote safe innovation while preventing psychological harm, addiction, data misuse, and ideological risks, all aligned with "core socialist values." The key rules from the draft include:

• AI use cases must align with approved social values and be demonstrably safe.

• AI must not generate illegal, harmful, manipulative, deceptive, violent, addictive, or socially destabilizing content.

• AI systems must incorporate ethics reviews, security controls, content governance, and risk management mechanisms.

• AI must not replace real social interaction or intentionally induce addiction or psychological dependency.

• AI training data must be legal, traceable, secure, resistant to poisoning, and aligned with approved ideological and cultural standards.

• AI must detect user emotional distress, intervene appropriately, and transfer to human operators in high-risk situations.

• AI interacting with minors must use a protected mode, require guardian consent, provide alerts, and impose usage limits.

• AI interacting with elderly users must support emergency contacts and prohibit simulating relatives or specific family members.

• AI must protect user interaction data and restrict third-party disclosure or sharing.

• AI must not reuse user interaction data or sensitive personal information for model training without explicit consent.

• AI must clearly and repeatedly disclose that it is not human.

• AI must prompt users to take breaks after prolonged use (e.g., pop-ups after 2 hours of continuous interaction).

• AI must provide easy and immediate exit options without obstructing disengagement.

• AI providers must responsibly handle service shutdowns or changes with advance notice and user mitigation.

• AI services must offer accessible complaint and feedback mechanisms.

• Large-scale or high-risk AI systems must undergo formal government-supervised security assessments.

• Security assessments must evaluate user safeguards, emergency protocols, and risk remediation.

• AI services must restrict, suspend, or terminate operations if serious risks to users or society arise.

• Distribution platforms must enforce compliance and remove non-compliant services.

• AI algorithms must be registered and reviewed under national rules.

• Providers must submit to audits, reassessments, and inspections.

• Innovation and testing should occur in government-supervised regulatory sandboxes.

• Providers must comply with regulatory orders, corrections, and risk mitigation.

• Violations face penalties, including service suspension.

The draft remains open for public feedback until January 25, 2026, with final rules expected afterward.

Citation: John Koetsier, "China Just Reinvented Asimov's 3 Laws Of Robotics," Forbes, December 29, 2025. www.forbes.com (summarizing the CAC draft released December 27, 2025).