How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute
This article was written by Scott Singer, Karson Elmgren, and Oliver Guest for the Carnegie Endowment for International Peace.
Since the January 2025 release of the DeepSeek-R1 open-source reasoning model, China has increasingly prioritized leveraging artificial intelligence (AI) as a key engine for economic growth, encouraged AI diffusion domestically, and continued to pursue self-sufficiency across the AI stack. Yet while China has been investing heavily in AI development and deployment, it has also begun to talk more concretely about catastrophic risks from frontier AI and the need for international coordination. The February 2025 launch of the China AI Safety and Development Association (CnAISDA, 中国人工智能发展与安全研究网络)—China’s self-described counterpart to the AI safety institutes (AISIs) that the United Kingdom, United States, and other countries have launched over the last two years—offers a critical data point on the state of China’s rapidly evolving AI safety conversation.
Despite its potential importance, little has been publicly reported on CnAISDA. What is it? How did it come about? And what does it signal about the direction of Chinese AI policy more broadly? This paper provides the first comprehensive analysis of these questions.