
A flawed premise is gaining traction in U.S. policy and tech circles: that the U.S. shouldn’t prioritize AI safety because China doesn’t. This thinking provides a convenient excuse for a dangerous race in which Washington values outpacing Beijing in AI development over safety.
The reasoning is that regulating AI would hinder the U.S. in the “AI race.” It assumes China isn’t focused on safety, so the U.S. should aggressively push forward, even if it means taking risks. This idea is not only inaccurate but also perilous.
Ironically, China’s approach to AI offers a valuable lesson: true progress requires control. Ding Xuexiang, a top Chinese tech official, stated at Davos in January 2025 that confident acceleration is impossible without a reliable braking system. Chinese leaders view safety as essential, not restrictive.
AI safety is a significant political issue in China. In April, President Xi Jinping led a Politburo meeting focused on AI, highlighting “unprecedented” risks. China’s national security strategy now includes AI safety alongside threats like pandemics and cyberattacks. Regulations mandate pre-deployment safety assessments for generative AI, and numerous non-compliant AI products have been removed from the market. In the first half of this year alone, China has introduced more national AI standards than in the previous three years combined. Additionally, the number of technical papers on advanced AI safety has significantly increased in China over the past year.
However, the last time U.S. and Chinese leaders discussed AI risks was in 2023. In September, officials from both countries suggested resuming talks “at an appropriate time,” but no meeting took place under the Biden Administration, and it remains unclear whether the Trump Administration will continue the effort. That silence is a missed opportunity.
China is open to collaboration, initiating a bilateral AI dialogue with the United Kingdom in May 2025. Chinese scientists have actively contributed to international initiatives, like the AI Safety Institute’s evaluations supported by 33 countries and intergovernmental bodies (including the U.S. and China) and the development of Global AI Safety Research Priorities.
A crucial first step is to reactivate the suspended U.S.-China dialogue on AI risks. Without official communication channels, effective coordination is unlikely. China expressed willingness to continue discussions as the Biden Administration concluded. Prior talks led to a modest but symbolically important agreement: both nations affirmed that human control must be maintained over nuclear weapons. This channel holds potential for further advancement.
Future discussions should concentrate on shared, high-stakes threats. OpenAI’s recent assessment of its latest ChatGPT Agent indicates it has surpassed the “High Capability” threshold in the biological domain based on the company’s standards. This suggests the agent could potentially assist users in creating dangerous biological weapons. Both the U.S. and China have a vested interest in preventing non-state actors from weaponizing AI. An AI-driven pandemic would disregard national boundaries. Furthermore, leading experts and Turing Award recipients from both the West and China have voiced concerns about advanced general-purpose AI systems potentially operating beyond human control, presenting catastrophic and existential risks.
Both governments have already recognized some of these threats. President Trump’s strategy cautions that AI may “pose novel national security risks in the near future,” specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, last September, China’s main AI security standards body emphasized the necessity for AI safety standards to address cybersecurity, CBRN, and loss of control risks.
From there, the two sides could take practical steps to build technical trust between leading standards organizations, such as China’s National Information Security Standardization Technical Committee (TC260) and America’s National Institute of Standards and Technology (NIST).
Additionally, industry organizations like the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S. could exchange best practices on risk-management frameworks. The AIIA has created “self-discipline conventions” that most leading Chinese developers have signed. A new Chinese whitepaper, released at the World Artificial Intelligence Conference (WAIC), specifically addresses frontier risks such as cyber misuse, biological misuse, large-scale manipulation, and loss of control, and could help align the two countries’ approaches.
As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Shanghai Initiative, unveiled at WAIC, explicitly calls for creating “mutually recognized safety evaluation platforms.” According to an Anthropic co-founder, a recent Chinese AI safety evaluation reached findings similar to Western assessments: advanced AI systems pose some CBRN risks and are beginning to show early signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities, and of how those vulnerabilities are tested, would lay the groundwork for broader safety collaboration.
Finally, the two sides could create incident-reporting channels and emergency response protocols. In the event of AI-related accidents or misuse, prompt and transparent communication is crucial. Establishing modern “hotlines” between top AI officials could ensure real-time alerts when models exceed safety limits or act unexpectedly. In April, President Xi Jinping stressed the need for “monitoring, early risk warning and emergency response” in AI. After any hazardous incident, a pre-arranged response plan should be in place.
Engagement will present challenges—political and technical obstacles are inevitable. However, AI risks are global, demanding a global governance approach. Instead of using China as an excuse for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won’t wait.