15 Things You Must Know About AI Governance in China
Author: Oliver Patel
China’s AI strategy
- China’s ambition is to be the global AI leader by 2030. DeepSeek’s progress should come as no surprise. China has made no secret about its aspirations of global technological leadership in AI. The ‘New Generation AI Development Plan’, published by China’s State Council in 2017, states that “by 2030, China’s AI theories, technologies, and applications should achieve world-leading levels, making China the world’s primary AI innovation center”.
- The government strongly supports open source AI. Promoting an open innovation ecosystem is a core part of how China aims to achieve its strategic ambition of AI leadership by 2030. The 2017 New Generation AI Development Plan states that the government should “encourage AI enterprises and research institutions to build open source platforms for public open AI research and development”. Leading Chinese tech companies frequently release open source models (like DeepSeek’s models and Alibaba’s Qwen model series), with support, fanfare and financial backing from state media and institutions. Only time will tell, but perhaps the early success of DeepSeek is vindicating this approach.
AI regulatory landscape
- China has adopted several national AI governance laws. Alongside the EU, China has arguably adopted more AI governance-specific laws that apply across its territory than any other jurisdiction (although, to be fair, this depends on how an ‘AI governance law’ is defined). Whilst there is no comprehensive law like the EU AI Act, China has adopted three important regulations:
- Interim Administrative Measures for Generative AI Services [entered into force August 2023];
- Regulations on the Administration of Deep Synthesis Internet Information Technology [entered into force January 2023]; and
- Internet Information Service Algorithmic Recommendation Management Provisions [entered into force March 2022].
- There are strict prohibitions on creating and sharing deepfakes. The Deep Synthesis regulations govern the development and use of AI to create and disseminate images, audio, video and other content produced by generative AI systems. This law was issued in November 2022, amid increasing fears about the use of AI to spread misinformation and disinformation. Key requirements include registering deep synthesis algorithms in China’s central algorithm registry and not using deep synthesis services to produce, publish or transmit “fake news information” or information which is “prohibited, harms the image of the nation or harms the societal public interest”.
- The Algorithmic Recommendation Law governs online content. The first AI governance-specific law adopted in China focuses on how algorithms disseminate and recommend online content, as well as how AI is used to manage workers and e-commerce platforms. Again, AI models in scope must be registered in the central algorithm registry. Furthermore, users must be notified about recommendation algorithms, and they must also be able to ‘opt out’ and turn off personalised recommendations.
- The Generative AI Services Law requires model evaluations. Perhaps the most consequential AI law for global companies operating in China is the Interim Administrative Measures for Generative AI Services. The focus of this law is on providers of “public-facing” generative AI services. It is relevant for companies both developing and using generative AI. For example, public generative AI services must “uphold the Core Socialist Values” and “use data and foundational models that have lawful sources”. In practice, this means that many AI models developed by Western organisations, and trained on information sources which are prohibited in China, cannot be made available to the public. Other requirements include conducting robust security and safety evaluations prior to releasing new models. Generative AI services and models which are not public or consumer facing, such as those used solely internally within companies, or for research and development activities, are not in scope.
- 300+ generative AI systems have been approved for public use. Under the Generative AI Services Law, generative AI systems must be approved by the Cyberspace Administration of China (CAC) before they can be released to the public. According to Concordia AI, 302 such systems had been approved for public use as of 27th January 2025. This is a rigorous process, which involves companies providing the government with direct access to the model, to facilitate comprehensive testing, evaluation and assessment.
- DeepSeek, ByteDance, Baidu, Alibaba and Tencent are among the key players. There is now a flourishing and deep ecosystem of Chinese tech companies, which are developing and releasing models that, in some cases, rival those released by U.S. tech giants. The whole world now knows about DeepSeek and its V3 and R1 models. However, this is just the tip of the iceberg. For example, Baidu’s Ernie Bot, the closest domestic rival to ChatGPT, surpassed 200 million users last year. And Tencent’s Hunyuan models are integrated into hundreds of applications, including WeChat. Despite this progress, it is fair to say that these companies have disclosed less about their approach to AI governance and safety than their Western counterparts.
International AI governance
- China has endorsed important global AI governance initiatives. Despite forging a unique and distinct policy stance on AI, China has endorsed and signed up to various international AI governance initiatives. For example, China attended the UK-hosted AI Safety Summit in 2023, and signed the ‘Bletchley Declaration’, which calls for “increased transparency by private actors developing frontier AI capabilities”. China also backed the United Nations General Assembly resolution on ‘Safe, Secure and Trustworthy AI for Sustainable Development’, alongside over 120 other countries.
- President Xi supports an international AI governance institution. At the Third Belt and Road Forum for International Cooperation, which was held in October 2023, President Xi Jinping announced the launch of the ‘Global AI Governance Initiative’ in his opening speech. As part of this initiative, China signalled its support for “the United Nations framework to establish an international institution to govern AI, and to coordinate efforts to address major issues concerning international AI development, security, and governance”.
AI safety and standardisation
- The CCP considers AI a significant public security risk. The Third Plenum is one of the most significant events in the Chinese political calendar. It is the third plenary session during each five-year political cycle, and it is the forum in which the Central Committee of the Chinese Communist Party (CCP) focuses on long-term economic and social policy and reforms. A CCP resolution adopted at the July 2024 Third Plenum states that China will “institute oversight systems to ensure the safety of AI”. According to Matt Sheehan, it is telling that the call for AI safety oversight is contained in a broader section of the resolution on national and public security risks. However, little detail has been published about exactly what these risks are considered to be.
- China has ambitions to release 50+ AI standards by 2026. China’s ascendance in technical standards development was one of my key themes for AI governance in 2024. China’s stated goal is to be an AI standard setter, not a standard taker, and it is pursuing this in a focused way. The National Information Security Standardisation Technical Committee (TC260) has been extremely active, announcing plans for 50+ AI standards by 2026 and releasing several last year, including on generative AI safety.
- Generative AI and AI Safety standards have already been published. In 2024, two consequential AI governance standards were published:
- China’s TC260 released the Technical Document on Basic Safety Requirements for Generative Artificial Intelligence Services, to support regulatory compliance and generative AI model approvals.
- China’s TC260 also published the AI Safety Governance Framework.
Looking ahead
- Comprehensive national AI legislation is on the horizon. There have been various reports and hints that China is working on comprehensive, national AI legislation. For example, in May 2024, China’s State Council announced that an AI law was “under preparation”. Since then, additional details have been sparse.
- China believes that failing to develop (AI) is the greatest threat. China has long viewed economic development and technological innovation as central to its long-term interests and global ascendance. Matt Sheehan notes that the phrase “failing to develop is the greatest threat to national security” is often repeated by Chinese politicians, scholars and advisors in AI policy debates. Given that 74% of global generative AI patents are filed in China, this appears to be taken seriously. However, as this article has demonstrated, so is AI governance and safety.