What Changes When AI Goes to War
Speed, saturation and the dissipation of command
AI changes war by making speed, distribution, and information decisive - and everything we believed about control, deterrence, and human judgment breaks as a result.
Humans have always been the bottleneck in war: the limited number of soldiers and pilots, the fog of war and scarce real-time data about on-ground activity, and physical constraints such as sleep, fatigue, reaction time, and the need for food and armaments that require a supply chain. These constraints defined the ability to win wars, and everything from guns to satellite imagery, airplanes, long-range missiles and nuclear weapons has tried to address them.
Decision-making had friction, doubt and incomplete information. Time mattered. Distance mattered. You couldn’t act everywhere at once, you couldn’t see everything at once, and you couldn’t respond instantly. Even when weapons became more destructive, war was still paced by humans.
For decades, military power accumulated around large, expensive, centralized assets: the Economist pointed out that aircraft carriers are big, expensive, vulnerable and popular. Along with aircraft carriers, fighter jets and missile systems let a few scarce platforms ensure dominance. They required long, complex and often politics-driven procurement cycles, specially trained personnel, and elaborate command structures, and they created long-term dependencies. Warfare was about protecting these platforms while using them to project force.
These systems also made war slower, more deliberate and more controllable. Limited visibility, incomplete information, and delayed intelligence meant that uncertainty slowed decisions. Leaders had to deliberate, debate, and infer intent. As Graham Allison’s Essence of Decision shows, decisions were filtered through organizations, politics, and bounded rationality. Strategy evolved under uncertainty, and shifted as the fog of war thinned or lifted.
Uncertainty is a key constraint. People like Stanislav Petrov, a human in the loop before that became a thing, have protected us from nuclear war in the past.
What happens when inferred intelligence replaces human decisions? What happens when the pace of decision-making outpaces human intervention? When escalation can no longer be debated, paused or signaled, but is decided nevertheless? A recent talk at the GSF Spring Summit by Ashish Taneja, founding partner of VC fund GrowX Ventures, sent me down a rabbit hole of looking at AI and war.
Why the old assumptions no longer hold
Elsa B. Kania wrote in Brookings in 2020:
As early as 2011, the PLA’s official dictionary included a definition of an “AI weapon” (人工智能武器), characterized as “a weapon that utilizes AI to pursue, distinguish, and destroy enemy targets automatically; often composed of information collection and management systems, knowledge base systems, decision assistance systems, mission implementation systems, etc.”
So what changes when AI is used in war?
The first assumption that collapses is speed.
In a recent talk at the GSF Spring Summit, Ashish Taneja, founding partner of VC Fund GrowX Ventures, pointed out that:
“The fastest drone today is probably at a 600, 700 kilometer per hour sort of a speed. You know, these are recreation drones, but it’s a matter of time [that] enemies or us will be using this in warfare. Let’s say 600 kilometers an hour, we are able to detect them today at a three-kilometer range and a five-kilometer range.”
“You won’t be able to see it from the naked eye. You need an element of intelligence to kind of pick it up. You spot them five, five kilometers there. They’re traveling at the 600 kilometer speed, and within seconds they’re in front of you.”
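A quick back-of-the-envelope calculation, using only the speed and detection ranges Taneja cites, shows how little reaction time that leaves:

```python
# Reaction-time arithmetic from the figures in the talk:
# a drone at 600 km/h, detected at a 3-5 km range.

def reaction_time_seconds(speed_kmh: float, detection_range_km: float) -> float:
    """Seconds between detection and arrival, assuming a straight-line approach."""
    speed_ms = speed_kmh * 1000 / 3600   # convert km/h to m/s
    return detection_range_km * 1000 / speed_ms

for range_km in (3, 5):
    t = reaction_time_seconds(600, range_km)
    print(f"Detected at {range_km} km: about {t:.0f} seconds to respond")
```

At 600 km/h the drone covers roughly 167 metres every second, so a 5 km detection range buys about 30 seconds, and a 3 km range about 18, well inside the time a human chain of command needs to react.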
AI also compresses decision-making: threats this fast demand responses just as fast.
From China’s Military Decision-Making in Times of Crisis and Conflict:
“AI will shorten the OODA loop (observe-orient-decide-act), raise situational awareness, and assist commanders in formulating judgments, planning missions, generating action plans, controlling operations, and making decisions.”
“The foremost traits of intelligentized warfare include severely compressed combat duration, transparent battlefields, human-machine joint decision-making, autonomous weapons, and intelligent support for combat systems.”
It increases saturation because it can be comprehensive and overwhelming: there will be so many drones in the sky that traditional systems won’t know how to deal with them.
For the same reason, at some point in time, the human in the loop is also going to be a myth. From China’s Military Decision-Making in Times of Crisis and Conflict:
“Generally speaking, Chinese military thinkers envision future wars as conflicts between unmanned weapon systems operating autonomously with limited interference from human operators.”
“PLA scholars at the Army Command College Combat Laboratory envision humans taking the lead in decision-making at the strategic level of war, humans and machines sharing equal responsibilities in campaign decision-making, and machines autonomously making decisions at the tactical level.”
So what is the role of a human being who decides slower than a machine does? Kania says that “operational expediency concerns could supersede safety if having a human in the loop became a liability”.
As a result, control over response decisions shifts to AI by default. Intelligentized warfare is upon us. From Jamestown and Reuters, in China:
“The PLA’s use of DeepSeek is part of a push to anchor the next phase of “intelligentized warfare” on domestically controlled, low-cost AI infrastructure. Across official and academic publications, PLA experts describe DeepSeek not as a single product but as an evolving system architecture that combines a large-scale reasoning core with modular and domain-specific layers. They envision integrating this system across the PLA’s entire command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) chain.”
“DeepSeek-related procurement notices have accelerated throughout 2025, with new military applications appearing regularly on the PLA network, according to Jamestown. DeepSeek’s popularity with the PLA also reflects China’s pursuit of what Beijing calls “algorithmic sovereignty” - reducing dependence on Western technology while strengthening control over critical digital infrastructure.”
The US Department of War wants a 30-day deployment cycle for AI:
“The Department cannot be working off models that are months or years old. We must have the latest and greatest AI models deployed for our warfighters. Deploying these capabilities across all echelons is simply not enough, we must be able to support and sustain rapid model updates across all echelons. I direct CDAO to establish a delivery and integration cadence with AI vendors that enables the latest models to be deployed within 30 days of public release. This shall be a primary procurement criterion for future model acquisition.”
The second constraint to collapse is the scarcity of force.
Taneja explained:
“[There are] enough examples in the recent past where people are using saturation as a strategy. You know, put volumes in, and a lot of these things are getting sharper… these things are small, they’re compact, they’re doing something to conflict what probably smartphones did to media.”
What China’s competitive advantage in manufacturing and technology brings here is the ability to overwhelm traditional systems.
Reuters also reported:
China is looking at AI-powered robot dogs that scout in packs and drone swarms that autonomously track targets, as well as visually-immersive command centres and advanced war game simulations.
Last year, China set a record with almost 16,000 synchronised drones in the sky, each following a programmed flight path to create towers, blossoms, and a glowing “Sky Tree.” TechRadar also points out that “each drone’s movements were guided through RTK positioning and mesh networking, with updates sent in real time to maintain precision”, but warns that “such shows can fail, as seen in a previous Liuyang event where malfunctioning drones caught fire and fell toward the crowd.”
We’re clearly not there yet, but we need to understand what happens when we do get there. Drones do not need sleep or feel fatigue; they have range limitations, but far fewer than human beings; they don’t need food, and can operate 24x7, battery life permitting. Given access to metals, chips and a supply chain, drones can be manufactured in the hundreds of thousands.
We’re back to the era predating airplanes, because it is now about throwing more bodies at the problem. With drones we get saturation as a strategy in the sky, as Taneja pointed out: thousands of drones converging on a single target can overwhelm traditional air defense systems built largely for rare and expensive threats (airplanes, or a limited number of drones).
The third thing that collapses is the cost of force:
From Prospect Magazine:
The war in Ukraine, meanwhile, has shown what can be done with cheap drone technology. Workshops behind the Ukrainian front line build more than 100,000 every month. At first, grenades were attached to off-the-shelf models to target Russian troops. But Ukraine has since developed drones that can carry 5kg warheads capable of taking out tanks. They cost less than £1,000 to produce. The tanks cost several million. Sea drones, costing around £200,000 each, have sunk Russian battleships worth billions.
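The asymmetry can be expressed as a cost-exchange ratio. A minimal sketch, using the Prospect figures; note that the exact tank and warship values below are assumptions chosen to match “several million” and “billions”:

```python
# Illustrative cost-exchange ratios based on the Prospect figures.
# The target costs are assumed values ("several million" for a tank,
# "billions" for a warship), picked only to show the order of magnitude.

def cost_exchange_ratio(target_cost: float, weapon_cost: float) -> float:
    """Value of enemy asset destroyed per unit of currency spent on the weapon."""
    return target_cost / weapon_cost

tank_ratio = cost_exchange_ratio(target_cost=3_000_000, weapon_cost=1_000)
ship_ratio = cost_exchange_ratio(target_cost=1_000_000_000, weapon_cost=200_000)
print(f"FPV drone vs tank: roughly {tank_ratio:,.0f}:1")
print(f"Sea drone vs warship: roughly {ship_ratio:,.0f}:1")
```

Even with conservative assumptions, each pound spent on the attacking drone destroys thousands of pounds of defender hardware, which is the economic engine behind saturation.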
I’ve been following news regarding the rebel forces using drones in Myanmar. This is decidedly hobbyist (and fascinating), as Geopolitical Monitor writes:
Using a combination of commercial drones, plans downloaded from the Internet, and YouTube tutorials, they were able to manufacture surveillance and combat drones that ultimately turned the tide of battle.
“We’re all gamers,” laughed 3D, explaining that his team, all at least five years younger than himself, were well-versed in internet research and tech-savvy problem-solvers who enjoyed developing new technologies. “We are collecting resources from all over the internet,” 3D explained. “And we develop our designs. In some cases, we copy some of the ready-made ones.” They also use 3D printers to make components, as it can take up to five months to receive replacement parts, which must be transported through the jungle from Thailand or China.
…
These drones have a range of up to 5 km and fly at an altitude of 800 to 1,000 meters. “This helps avoid obstacles such as trees or power lines and ensures the drones are less vulnerable to being shot down,” he explained.
For the KNDF, drones are a crucial asset. “It’s a game-changing weapon. We don’t have the fighter jet, we don’t have the helicopter, so we rely on the drone for our airstrikes,” he said.
Of course, drones are susceptible to jamming (if you’ve played Starcraft, you would have heard of EMP Shockwaves), but “even with the jammers in place, if enough drones are deployed, using differing signals, some will still succeed.”
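The “enough drones will get through” claim follows from simple probability. A minimal sketch, under the simplifying (and admittedly unrealistic) assumption that each drone is stopped independently with the same probability:

```python
# Saturation vs jamming, as a toy probability model.
# Assumption: each drone is independently stopped with probability p_stopped.
# Real jamming failures are correlated, so this is only an illustration of
# why swarm size defeats even a high per-drone stop rate.

def prob_at_least_one_through(p_stopped: float, n_drones: int) -> float:
    """P(at least one drone survives) = 1 - P(all n are stopped)."""
    return 1 - p_stopped ** n_drones

for n in (1, 10, 50):
    p = prob_at_least_one_through(0.9, n)
    print(f"{n} drones, 90% stop rate each: {p:.1%} chance one gets through")
```

Even if the defense stops 90% of drones individually, a swarm of 50 gets at least one through more than 99% of the time; the defender has to be nearly perfect, while the attacker only has to add volume.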
What this indicates is that the system assumes failure: loss of drones is a design assumption, which makes it tolerable. That’s the structural shift. Both weapons and decision-making can speed up, even if accuracy gets compromised. Cost enables this. Taneja pointed out:
“…drones which are $10, $20, $100, $200. They’re doing damage to assets worth millions of dollars, and they’re doing work which traditionally those millions of dollars worth of assets were doing. So wars are getting asymmetrical.”
The fourth impact is the collapse of the fog of war:
There are around 10,000 satellites in space, Taneja said, about 85-90% of which are managed or controlled by the US and China. India has nine. He pointed out why this is important:
“Space today is becoming infrastructure. I can’t see what’s behind this particular wall, but there are assets in space that allow me to observe what’s happening in neighboring world or my enemy territories or the zones I want to keep track of”
“From decades ago where space was more exploration based, today it’s more around intelligence, right?”
With enough satellites, space enables continuous observation. Intelligence becomes persistent, not episodic. He explains:
“…that AI layer is allowing you to move from the pixels to actual decision making. Where is that truck? Where is the tank? How do I evade? What’s the activity on the border which is happening? Where is the potential conflict happening?”
“One of our other portfolio companies in the RF analytic space is picking up signals from the ground and figuring out where the action is, all the communication chatter. Now, you fuse all these different data points together, right? You know, you’ve got your EO, SAR and RF. The amount of intelligence and intent which you will get is very different. All these pixels are moving into decisions, and assets in space are enabling that.”
The fifth assumption to collapse is the “pause/reset” approach:
The old assumption was that war is episodic: battle, attack, regroup, re-supply, evaluate. As information is no longer episodic, neither is decision-making. You have rapid iteration, autonomous decisions, and weapons not bound by human physical limitations.
From China’s Military Decision-Making in Times of Crisis and Conflict:
Speed refers to an unmanned system’s ability to quickly enter the battlefield and establish superiority. Precision refers to an intelligent system’s ability to see through the fog of war and formulate decisions that will allow precise strikes on enemy targets. Comprehensiveness refers to the ability of intelligent systems to simultaneously address threats in all domains of war, both real and virtual. Depth refers to understanding enemy weaknesses from every dimension and organizing intelligent unmanned attacks accordingly. Constancy refers to the replacement of human operators by machines that can continuously operate far beyond human physiological limits.
When sensing is continuous, targeting is continuous. War becomes always on: sensing, assessing, predicting, responding. In other words, once AI systems operate continuously across sensing, decision-making, and strike functions, war no longer pauses between engagements. It persists. We already see this in Ukraine.
In Roles and Implications of AI in the Russian-Ukrainian Conflict, Samuel Bendett writes:
Artificial Intelligence is therefore used for data analysis to aid Ukrainian decision-making. A key role of AI in Ukraine’s service is the integration of target and object recognition with satellite imagery, prompting Western commentators to note that Ukraine has an edge in geospatial intelligence. AI is used to geolocate and analyze open-source data such as social media content to identify Russian soldiers, weapons, systems, units or their movements. According to public sources, neural networks are used to combine ground-level photos, video footage from numerous drones and UAVs, and satellite imagery to provide faster intelligence analysis and assessment to produce strategic and tactical intelligence advantages.
The best-made plans no longer work when the other side has the ability to modify decisions autonomously.
Therefore, there’s no room for bureaucratic pause. The US Department of War memo acknowledges this shift, including the need for data sharing, and changing approaches to risk tradeoffs:
“Wartime Approach to Blockers. We must eliminate blockers to data sharing, Authorizations to Operate (ATOs), test and evaluation and certification, contracting, hiring and talent management, and other policies that inhibit rapid experimentation and fielding. We must approach risk tradeoffs, ‘equities’, and other subjective questions as if we were at war. To this end, I expect our CDAO to act as a Wartime CDAO and work with the Chief Information Officer to fully leverage statutory and delegated authorities to accelerate AI capability delivery, including cross-domain data access and rapid ATO reciprocity on behalf of pace-pushing leaders across the Department.”
Sixth, decisions are no longer clear:
While space might make on-ground activity transparent, we now have opacity in decision-making. It’s difficult to identify what triggered a response in a multi-step decision-making model. How do memory, training data, inferences and prediction modeling impact decisions? How does the invisibility of the autonomous decision-making process impact our confidence in deploying these systems? Remember, this is not a decision about whether a robo-trading tool should short a share: lives are at stake here.
The problem is that human hesitation can now be seen as a strategic risk, and the system has to be optimised to tolerate false positives over delayed responses.
From China’s Military Decision-Making in Times of Crisis and Conflict:
“Those machines with superior algorithms, data, and cognitive abilities will more wisely predict battlefield developments and produce a finer course of action.”
Ashish Taneja said something similar in his talk:
“There’s compute available, there’s data available, but what should I optimize for? Should I create a perfect model or should I create the fastest model? Especially in wartime, it’s more speed, less perfection.”
Pointing to some recent cases, he said: “There were delays of up to two hours, four hours, six hours, eight hours. You know, wartime scenario, just imagine the damage two hours, four hours, six hours can do.”
In a nutshell: action cannot be delayed by uncertainty.
The US Department of War acknowledges this:
“Speed Wins. We must internalize that Military AI is going to be a race for the foreseeable future, and therefore speed wins. We must weaponize learning speed, and measure and manage cycle time and adoption rates as decisive variables in the AI era. We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment. I direct CDAO to establish deployment velocity and operational cycle-time metrics for all PSPs, to be a focus of their monthly reporting to the Deputy Secretary and USW(R&E).”
As I wrote in When AI buys or sells for you, “both sides can process far more data faster than humans ever could. They have infinite time/speed at their disposal”.
Seventh, escalation and de-escalation may no longer be in our hands:
Deploying autonomous decision-making tools leaves room for hallucination and unmitigated escalation. When systems are optimised for speed over precision, this is bound to happen. How will hallucinations be prevented? How can de-escalation be orchestrated?
We lose something else: I’m not sure how political signaling would work or help here. Earlier, strategic restraint could be demonstrated and red lines could be communicated in human-driven conflict. Someone could go by gut and decide not to press the nuclear button. How will automated decision-making impact this? Will machines be able to read the signals? Can they be optimised for doubt?
Kania agrees:
The advent of AI/ML systems and greater autonomy in defense will impact deterrence and future warfighting among great powers. This military-technological competition could present new threats to strategic stability…
Related to this, a very significant assumption falls: that command is control.
It’s hard to tell whether the weapon in play here is the drone, the satellite or the autonomous system. It’s probably a combination of the three, because none can perform well without the others: the competitive advantage is orchestration. Data, when supplied to AI, aids autonomous decision-making that can orchestrate drones. Those with better data for autonomous action, and the drones and AI to match, will have a significant advantage in war.
The US Department of War memo highlights the need for speed in deployment, development and experimentation, and also the need for data:
“Competition > Centralized Planning. As America’s AI ecosystem demonstrates, robust competition by small teams, with transparent metrics for results, is the engine of commercial AI leadership. We must bring this model into the Department and encourage robust competition to spur faster military AI integration. Small, accountable teams will win over process in a race characterized by dynamic and unpredictable innovation. We will measure success through continuous field experimentation: putting AI capabilities in operators’ hands, gathering feedback within days not years, and pushing updates faster than the enemy can adapt.”
“Data Access. I direct the CDAO to enforce, and all DoW Components to comply with, the ‘DoD Data Decrees’ to further unlock our data for AI exploitation and mission advantage… The CDAO is authorized to direct release of any DoW data to cleared users with valid purpose, consistent with security guidelines… Our data advantage is meaningless if our developers and operators cannot exploit it.”
Kania points out:
The PLA is actively pursuing AI-enabled systems and autonomous capabilities across services and for all domains of warfare… integrating these capabilities across command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR)… enabling coordination among unmanned systems and decision-support architectures.
When AI goes to war, the role of humans becomes even more important
As autonomy becomes key, control is now architectural, and decisions are iterative, not command-driven.
A fault line that emerges is that militaries are built to think in terms of hardware and assets, not in terms of continuously learning systems whose behaviour cannot be fully specified in advance. If warfare becomes a contest between adaptive architectures, and decisions have to be made before humans can parse the data in front of them, then deterrence, control, and ethics must also be rethought. Old assumptions about escalation, restraint, and human oversight don’t fit easily in this emerging construct.
The advent of AI is shifting control over war to distributed learning architectures. The role of humans in war is going to be limited: command is no longer control, and someone’s gut is not going to determine whether a trigger gets pulled or a system stands down. Because of this, the role of human beings in the autonomous orchestration of war is going to be even more important: in architecting the systems.
Who decides the objectives, the thresholds, the tolerance for error, and the rules under which machines escalate and de-escalate? That is the new Essence of Decision.



