The weapons of the next world war won’t just be missiles and nukes. They’ll be algorithms. And right now, no one in power fully understands what they’ve built — or what it’s about to do to global stability.

A recent roundtable published by Texas National Security Review tackled exactly this: Artificial Intelligence and the Future of Strategic Stability. The researchers involved aren’t doomers or tech evangelists. They’re serious people asking a serious question — does AI make nuclear war more or less likely? The answer, predictably, is: it depends. And that non-answer should terrify you.

The Old Rules Don’t Apply Anymore

Cold War deterrence was brutal but legible. Two superpowers. Mutually assured destruction. Red phones. Slow-moving decisions with enough friction to prevent accidents. Everyone understood the game board.

AI breaks that board into pieces.

Machine-speed decision-making means a miscalculation can escalate before any human picks up a phone. Autonomous systems can misread sensor data. Early warning networks, already prone to false alarms, now get fed through layers of pattern recognition that nobody fully audits. The 1983 Soviet nuclear false alarm — where Lieutenant Colonel Stanislav Petrov judged an apparent incoming strike to be a glitch and declined to report it up the chain — wouldn't play out the same way if an algorithm were making that call.

And it’s not just the US and Russia anymore. China is building its own AI-integrated military infrastructure at a pace that would have seemed fictional a decade ago. The standoff is a triangle now, and triangles are inherently less stable than two-point standoffs.

Speed Is the Real Threat

Every military analyst worth listening to keeps circling back to the same word: compression. Time compression. The gap between “something happened” and “we must respond” is shrinking. Humans are increasingly advisory in that loop, not central to it.

That’s not science fiction. That’s procurement policy in 2024.

The US military is actively funding autonomous targeting systems. China’s People’s Liberation Army has published doctrine explicitly embracing “intelligentized warfare.” Russia, despite its battlefield struggles in Ukraine, is pouring resources into research on AI-assisted nuclear command systems. Every major power is racing to build faster, smarter, more autonomous war machinery — and simultaneously insisting everyone else slow down.

The hypocrisy is almost funny. Almost.

The Trump administration’s vow to crack down on Chinese companies exploiting AI models made in the US is a perfect microcosm of this contradiction. Washington wants to dominate AI. Washington also wants to contain AI proliferation. You can’t do both forever. Eventually, the technology escapes the jar.

The Verification Problem Is Unsolvable (For Now)

Arms control worked — when it worked — because you could verify compliance. Inspectors could count warheads. Satellites could photograph missile silos. Treaties had teeth because violations were visible.

You cannot inspect an algorithm.

How do you verify that China’s AI-enabled early warning system has a “human in the loop”? How do you confirm that a US autonomous drone swarm won’t interpret certain radar signatures as an attack trigger? The opacity is structural. The black box problem isn’t just a Silicon Valley headache. It’s a geopolitical catastrophe waiting to unfold.

And unlike biological or chemical weapons — where the physical infrastructure leaves traces — AI capabilities live in data centers, in model weights, in code repositories that can be copied in seconds and hidden in plain sight. The entire verification architecture built since 1963 is quietly becoming obsolete.

The Hot Take

Arms control treaties negotiated between the US, China, and Russia on AI will fail. Not because the diplomats aren’t smart enough. Because the technology doesn’t respect the boundaries of nation-states. Private labs, rogue states, and non-state actors will fill every gap that state actors agree to leave empty. The honest conversation isn’t “how do we control AI in warfare” — it’s “how do we manage a world where that control was never really possible.”

What Gets Ignored in the Policy Papers

Most serious analysis of AI and strategic stability focuses on the big players. Understandably. But the second-order effects matter just as much.

Smaller states with access to advanced AI tools can now punch far above their traditional military weight. The barrier to building sophisticated intelligence, surveillance, and targeting capabilities has dropped dramatically. We’re already seeing it. The story of how Chinese labs race for the next ‘first-in-class’ breakthrough isn’t just about pharmaceuticals — it’s a window into how quickly technological capability can shift regional power dynamics when the incentives align.

The inequality angle matters too. Just as life-extending treatments risk becoming a biological lottery for the wealthy few, advanced military AI is becoming a geopolitical lottery — available in full to powers with deep pockets, and in fragments to everyone else. That asymmetry breeds desperation. Desperate actors make dangerous decisions.

The Clock Is Running

The window to build meaningful international norms around AI and military decision-making is not permanently open. Every month that passes without serious multilateral frameworks is a month where defaults get set in code, in doctrine, in procurement contracts. Those defaults become very hard to undo. The conversation needs to happen faster, louder, and with far less deference to the defense contractors and tech firms who have enormous financial incentives to keep the guardrails vague. History doesn’t wait for consensus.

