US and China Decline to Join Global Military AI Principles at Spain Summit

The world’s two largest military powers have declined to join a new pledge aimed at curbing the risks of artificial intelligence on the battlefield, underscoring how strategic rivalry is outpacing efforts to set shared rules. At a high-profile summit in Spain, the United States and China stayed outside a joint declaration on responsible military AI, even as dozens of other governments endorsed new guardrails.

The decision leaves a gap at the center of emerging norms for autonomous weapons and decision-support systems, raising questions over how far voluntary commitments can go without buy-in from Washington and Beijing. It also highlights a widening split between countries that want binding limits on AI-enabled warfare and those determined to preserve maximum freedom to innovate.

The summit where consensus fell short

The latest push to shape military AI norms came at a gathering in A Coruña, Spain, where officials met under the banner of the Responsible AI in the Military Domain, or REAIM, summit. A total of 85 countries attended the event, but only about a third were willing to sign a declaration on how to govern the development and use of AI-enabled weapons and decision tools, according to detailed accounts of the talks. That gap between attendance and signatures captured the political hesitation that still surrounds hard constraints on cutting-edge defense technology.

Reports from the meeting say that only 35 of the 85 countries present endorsed the text, a gap that underlines how contested the issue remains even among partners and allies. The declaration’s backers framed it as a way to reduce the risk of accidents, miscalculation, or unintended escalation as militaries race to integrate AI into targeting, surveillance, and command systems, a concern that featured prominently in coverage of the summit’s outcome.

Why Washington and Beijing stayed out

Against that backdrop, the decision by the United States and China to abstain carried outsized weight. Both governments have poured resources into AI-enabled command, control, and weapons systems, and officials from each side have repeatedly warned that the other is racing ahead in this domain. In coverage of the summit, one account, by Victoria Waldersee, described how U.S. and Chinese flags framed the debate, while other summaries stressed that China and the United States chose not to align themselves with the new pledge despite their central role in global AI development.

Diplomats familiar with the talks, cited in detailed write-ups of the meeting, said Washington and Beijing were wary of language they feared could constrain future military options or be interpreted as a backdoor limit on autonomous weapons. Waldersee’s report emphasized that officials from both capitals argued they already follow responsible practices and prefer to shape norms through their own channels. Summaries of the outcome likewise underscored that China and the United States abstained, highlighting how their absence overshadowed the commitments made by smaller states.

What the declaration actually promises

For the countries that did sign, the new text is meant to complement an earlier framework, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. That document, described in detail by the U.S. State Department, lays out principles such as ensuring human responsibility for the use of force, testing AI systems under realistic conditions, and taking steps to reduce bias and unintended behavior in autonomous platforms. The latest summit declaration builds on those ideas, urging militaries to keep humans in the loop for critical decisions and to avoid deploying systems they cannot adequately understand or control.

According to officials who backed the text, the goal is not to freeze innovation but to channel it into safer applications, for example by requiring rigorous evaluation before AI is integrated into nuclear command and control or early warning systems. One detailed summary of the summit explained that around a third of the countries gathered in A Coruña, Spain, agreed in February to a declaration that calls for safeguards on data quality, transparency about AI capabilities, and mechanisms to shut down malfunctioning systems, all aimed at preventing accidents and unintended escalation. Another account stressed that about a third of nations at the REAIM gathering were willing to sign, suggesting that even among like-minded states there is still debate over how far such voluntary limits should go, a point echoed in a separate description of the 85-country summit in A Coruña.

Who signed on, and who is hedging

The roster of signatories reflects both enthusiasm and caution. European governments such as Britain and the Netherlands joined the declaration, as did Asian partners including South Korea and conflict-tested states such as Ukraine. One detailed account, relayed via a regional outlet and echoed in a WKZO summary, noted that only 35 of the 85 countries attending the REAIM forum signed, but highlighted that the Netherlands, South Korea, and Ukraine were among those pushing for clearer rules. For governments facing immediate security threats, the argument is that guardrails can coexist with rapid modernization, and that clarity about red lines can actually strengthen deterrence.

Other states, including some close partners of Washington, appear to be hedging. A report from RBC-Ukraine framed the outcome as an “AI weapons pact” stalling after the United States and China refused to join, and stressed that Ukrainian officials still see value in codifying principles even if the biggest powers hold back. Another analysis, carried in an Al Jazeera overview of the Russia-Ukraine war, noted that only about a third of countries at the summit agreed to the declaration on AI, reinforcing the picture of a world where many governments are still weighing the trade-off between ethical commitments and perceived battlefield advantage. A separate RBC dispatch repeated the point that the pact had stalled after the US and China refused to join, underscoring how the absence of the two largest players is shaping perceptions of the entire process.

The strategic and ethical stakes of an uneven pact

The uneven uptake of the declaration leaves a patchwork of norms at a time when military AI is spreading from labs into live operations. In my view, that creates a two-speed world: one group of states voluntarily constraining how they use autonomous systems, and another group, including the United States and China, reserving more room to experiment. Analysts quoted in summit coverage warned that this divergence could increase the risk of miscalculation if, for example, an AI-enabled early warning system misreads a missile test as an attack and there is no shared baseline for how such tools should be designed and overseen. One detailed account, linked to Reuters reporting, stressed that the declaration’s backers see it as a way to reduce accidents, miscalculation, or unintended escalation, but acknowledged that its impact will be limited if the largest militaries remain outside.
