War is a fearsome accelerant of arms races. Before Russia invaded Ukraine two years ago, the ethics of using land mines and cluster munitions were the subject of heated debate, and many states had signed agreements not to use either. But once the desperate need to win takes over, governments can lose their qualms and embrace once-controversial technologies with gusto. For that same reason, the war between Russia and Ukraine has banished any misgivings either country might have had about military use of artificial intelligence. Each side is deploying millions of unmanned aerial vehicles, or UAVs, to conduct surveillance and attack enemy positions—and relying heavily on AI to direct their actions. Some of these drones are assembled from small, simple kits that can be bought from civilian manufacturers; others are more advanced attack weapons. The latter category includes Iranian-built Shaheds, which the Russians have been using in great numbers during an offensive against Ukraine this winter. And the more drones a nation’s military deploys, the harder it becomes for human operators to oversee them all.
The idea of letting computer algorithms control lethal weapons unsettles many people. Programming machines to decide when to fire on which targets could have horrifying consequences for noncombatants. It should prompt intense moral debate. In practice, though, war short-circuits these discussions. Ukraine and Russia alike desperately want to use AI to gain an edge over the other side. Other countries will likely make similar calculations, which is why the current conflict offers a preview of many future wars—including any that might erupt between the U.S. and China.
Before the Russian invasion, the Pentagon had long been keen to emphasize that it would always keep humans in the decision loop before deadly weapons were used. But the ever-growing role of AI-enabled drones over and behind Russian and Ukrainian lines—along with rapid improvements in the accuracy and effectiveness of these weapons systems—suggests that military planners all around the world will get used to what was once deemed unthinkable.
Long before AI was ever deployed on battlefields, its potential use in war became a source of anxiety. In the hit 1983 film WarGames, Matthew Broderick and Ally Sheedy saved the world from AI-led nuclear destruction. In the movie, the U.S. military, worried that humans—compromised by their fickle emotions and annoying consciences—might not have the nerve to launch nuclear weapons if such an order ever came, had handed over control of the U.S. strategic nuclear arsenal to an artificially intelligent supercomputer called WOPR, short for War Operation Plan Response. Broderick’s character, a teenage computer hacker, had accidentally spoofed the system into thinking the U.S. was under attack when it wasn’t, and only human intervention succeeded in circumventing the system before the AI launched a retaliation that would destroy all life on the planet.
The debate over AI-controlled weapons moved along roughly the same lines over the next four decades. In February 2022—the same month that Russia launched its full-scale invasion—the Bulletin of the Atomic Scientists published an article titled “Giving an AI Control of Nuclear Weapons: What Could Possibly Go Wrong?” The answer to that question was: lots. “If artificial intelligences controlled nuclear weapons, all of us could be dead,” the author, Zachary Kallenborn, began. The fundamental risk was that an AI could make mistakes because of flaws in its programming or in the data it reacts to.
Yet for all the attention paid to nukes launched by a single godlike WOPR system, the real influence of AI lies, as the Russo-Ukrainian war shows, in thousands of small, conventionally armed systems, each carrying its own programming that lets it take on missions without a human guiding its path. For Ukrainians, one of the most dangerous Russian drones is the “kamikaze” Lancet-3, which is small, highly maneuverable, and hard to detect, much less shoot down. A Lancet costs about $35,000 but can damage battle tanks and other armored fighting vehicles that cost many millions of dollars apiece. “Drone technology often depends on the skills of the operator,” The Wall Street Journal reported in November in an article about Russia’s use of Lancets, but Russia is reportedly incorporating more AI technology to make these drones operate autonomously.
The AI in question is made possible only by Western technologies that the Russians are smuggling past sanctions with outside help. The target-detection technology reportedly allows a drone to sort through the shapes of vehicles and other objects it encounters on its flight. Once the AI identifies a shape as characteristic of a Ukrainian weapons system (for instance, a distinctive German-made Leopard battle tank), the drone’s computer can direct the Lancet to attack that object, possibly even adjusting the angle of attack to inflict the greatest possible damage.
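To make that logic concrete, here is a minimal sketch, in Python, of the kind of detect-classify-engage loop such a system implies. Everything in it is a made-up illustration: the class names, the confidence threshold, and the list of target types are assumptions for the example, not details of the Lancet’s actual software.

```python
# Illustrative only: a toy detect-classify-engage loop. All names,
# labels, and thresholds are hypothetical and do not describe the
# software on the Lancet or any other real weapon.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "leopard_2_tank", "truck", "unknown"
    confidence: float   # classifier confidence, 0.0-1.0
    bearing_deg: float  # direction of the detected object

# Shapes the onboard model treats as valid targets (assumed for the example).
TARGET_LABELS = {"leopard_2_tank", "self_propelled_howitzer"}
CONFIDENCE_THRESHOLD = 0.85

def choose_target(detections: list[Detection]) -> Detection | None:
    """Return the highest-confidence detection that matches a known
    target class, or None if nothing clears the threshold."""
    candidates = [
        d for d in detections
        if d.label in TARGET_LABELS and d.confidence >= CONFIDENCE_THRESHOLD
    ]
    return max(candidates, key=lambda d: d.confidence, default=None)

if __name__ == "__main__":
    frame = [
        Detection("truck", 0.91, bearing_deg=12.0),
        Detection("leopard_2_tank", 0.88, bearing_deg=47.5),
    ]
    target = choose_target(frame)
    if target is not None:
        # A real system would now plan an approach and angle of attack;
        # here the decision is only printed.
        print(f"Engage {target.label} at bearing {target.bearing_deg}")
    else:
        print("No valid target; continue loitering")
```

The point of the sketch is how little stands between detection and attack: the “decision” reduces to a threshold comparison on a classifier’s output, which is exactly why flaws in the programming or the underlying data are so consequential.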
In other words, every Lancet has its own WOPR on board.
In the AI race, the Ukrainians are also competing fiercely. Lieutenant General Ivan Gavrylyuk, the Ukrainian deputy defense minister, recently told a French legislative delegation about his country’s efforts to put AI systems into its French-built Caesar self-propelled artillery pieces. The AI, he explained, would speed up the process of identifying targets and then choosing the best type of ammunition to use against them. The time saved could make a life-and-death difference if Ukrainian artillery operators identify a Russian battery faster than the Russians can spot them. Moreover, this kind of AI-driven optimization can save a lot of firepower. Gavrylyuk estimated that AI could reduce ammunition use by 30 percent, which is a massive help for a country now being starved of ammunition by a feckless U.S. Congress.
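For a sense of what that kind of optimization might look like, here is a minimal sketch in Python of choosing a round by target type and range, preferring whichever option expends the fewest shells. The ammunition names, ranges, and per-target counts are invented for the example; they are not Gavrylyuk’s figures or the Caesar’s actual fire-control logic.

```python
# Illustrative only: a toy ammunition-selection rule of the sort an AI
# assistant might apply. Round names, ranges, and effectiveness figures
# are invented for the example.
AMMO_TABLE = [
    # (round, max_range_km, effective_against, rounds_needed_per_target)
    ("precision_guided", 40, {"artillery", "armor"}, 2),
    ("standard_he",      25, {"infantry", "trucks", "artillery"}, 8),
]

def select_round(target_type: str, range_km: float):
    """Pick the round that can reach the target, is rated against it,
    and expends the fewest shells; return None if nothing qualifies."""
    options = [
        (name, needed)
        for name, max_range, good_against, needed in AMMO_TABLE
        if range_km <= max_range and target_type in good_against
    ]
    return min(options, key=lambda x: x[1], default=None)

print(select_round("artillery", 32))  # -> ('precision_guided', 2)
print(select_round("infantry", 12))   # -> ('standard_he', 8)
```

A rule of this sort, applied automatically across thousands of fire missions, is the kind of mechanism that could produce ammunition savings on the scale Gavrylyuk described.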
The AI weaponry now in use by Ukraine and Russia is only a taste of what’s coming to battlefields around the world. The world’s two greatest military powers, China and the U.S., are undoubtedly trying to learn from what’s happening in the current war. In the past two years, the U.S. has been openly discussing one of its most ambitious AI-driven initiatives, the Replicator project. As Deputy Defense Secretary Kathleen Hicks explained at a news conference in September, Replicator is an attempt to use self-guided equipment to “help overcome China’s advantage in mass.” She painted a picture of large numbers of autonomous vehicles and aerial drones accompanying U.S. soldiers into action, taking on many of the roles that humans used to perform.
These AI-driven forces—perhaps solar-powered, to free them from the need to be refueled—could scout ahead of the Army, defend U.S. forces, and even deliver supplies. And although Hicks didn’t say so quite as openly, these drone forces could also attack enemy targets. The timeline that Hicks described in September was incredibly ambitious: She said she hoped Replicator would come online in some form within two years.
Programs such as Replicator will inevitably raise the question of how much further to limit the part humans will play in future combat. If the U.S. and China can construct thousands, perhaps millions, of AI-driven units capable of attacking, defending, scouting, and delivering supplies, what is the proper role for human decision making in this form of warfare? What will wars fought by competing swarms of drones mean for human casualties? Ethical conundrums abound, and yet, when war breaks out, they usually get subsumed in the drive for military superiority.
Over the longer term, the relentless advance of AI could lead to major changes in how the most powerful militaries equip themselves and deploy personnel. If combat drones are remotely controlled by human operators far away, or are entirely autonomous, what is the future of human-piloted fixed-wing aircraft? Having a human operator on board limits how long an aircraft can stay aloft, requires it to be big enough to carry at least one and often many humans, and demands complex systems to keep those humans alive and functioning. In 2021, a British company got an $8.7 million contract to provide explosive charges for the pilot-ejector seats—not the seats themselves, mind you—of some of these aircraft. The total cost to develop, install, and maintain the seat systems likely runs into nine figures. And the seats are just one small part of a very expensive plane.
A highly effective $35,000 AI-guided drone is a bargain by comparison. The fictional WOPR almost started a nuclear war, but real-life artificial-intelligence systems keep getting cheaper and more effective. AI warfare is here to stay.