Artificial Intelligence Is Already Making War More Horrific
AI-assisted warfare extends a logic with roots in the industrial warfare of the 20th century: a cold distance that turns humans into points in a dataset.

Surveillance via a drone equipped with artificial intelligence. (Niharika Kulkarni / AFP via Getty Images)
The United States is using artificial intelligence in its war with Iran. The military says the “variety” of AI systems in use is dedicated to sorting data, deployed as tools and not as agents. The chief of America’s Central Command, Brad Cooper, says AI systems assist the armed forces, allowing them to “sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.”
AI will speed up the pace of target acquisition and firing, and thus the pace of war, death, destruction, and whatever comes in the aftermath. Cooper insists that humans make the final call. That’s less reassuring than it’s meant to be. A recent report notes that “the targets for Operation Epic Fury were identified with the aid of the National Geospatial-Intelligence Agency’s Maven Smart System, which folds in data from surveillance and intelligence, among other data points, and can lay out the information on a dashboard to support officials in their decision-making.”
Nevertheless, we’re told that AI tools don’t “explicitly create” targets; they merely “identify potential points of interest for military intelligence.” This is a bit like saying that information doesn’t impact decisions — as if intelligence placed before a commander has nothing to do with where a strike is ordered.
If the February 28 bombing of Shajarah Tayyebeh elementary school in Iran was the result of an AI tool "folding in" old military intelligence to Maven's dashboard, it means that AI effectively shaped the ostensibly human "final call." Cooper's reassurance that human military personnel are still in the cockpit may well be true. Nevertheless, adding AI into the matrix of targeting systems and acting on its recommendations amounts to AI military decision-making, however much humans may still pull the trigger.
When War Hit the Small Screen
The deployment of AI systems in the latest American war may remind some of the Gulf War of 1990–91. That conflict may not be remembered in the history books as a major affair in itself, but it was a war we watched on television in real time, screens lit up in green and punctuated with sudden bursts of light. It featured prominently in CNN's round-the-clock coverage. In the early 1990s, new technologies and new ways of doing business in both warfare and telecommunications meant that something had changed.
War had turned into a more distant and dehumanized affair both in its execution, with cruise missiles from hundreds of miles away, and its consumption, with the whole thing shown to the public almost as if it were footage from a video game. One got the sense there was no going back and that whatever else this was, it was a trip down a road that offered no chance for a U-turn.
In Age of Extremes, historian Eric Hobsbawm warned that in the twentieth century, modern war technologies and the bureaucratic systems that underwrote conflict at scale had fundamentally changed warfare, enabling a horrific total war that could not have existed without the power of distance. While the purpose of distance in war is to secure strategic and tactical advantages, cover, better position, and surprise, its ultimate effect is separation.
If violence, even mass violence, can be action at a distance with the actor removed from the immediate, visceral, and corporeal consequences of their deeds, then violence becomes impersonal and unreal, even virtual — like playing a video game. Push the trigger button, tip the joystick upward, and move on with your day as the pixels on the screen vanish. Home by dinner, in time for a few rounds of Call of Duty.
Semper Fi, Terminator
Today AI in warfare is used to sort through information rapidly and assist humans in acquiring targets. Tomorrow it may be used in ways we can't fathom today or tend to dismiss as a secondary or tertiary threat — more a paranoid fantasy drawn from The Terminator than a real and present danger. The immediate threat amounts to the same thing either way: humans dehumanizing themselves.
The tools of the trade are machines we use to get better at killing, the sort that relieve us of the burden that has plagued those who commit violence throughout human history: you had to be up close to do it, near enough to watch the lights go out. AI is thus not just a tool for sorting but also a tool for putting literal and figurative distance between the operator and the damned.
If the last century brought us the capacity to push a button to drop a bomb, this one will allow us to push a button to have a computer tell us where to drop it too. You can’t get much more removed from destruction than that.
The shift is horrifying and terrifying in equal measure, but it's less a new way of doing things than the next logical step toward fully digitized and dehumanized destruction of the sort that the perspicacious saw coming decades ago. Hobsbawm saw the transformation of warfare in the twentieth century as a "new impersonality" of the sort that "turned killing and maiming into the remote consequence of pushing a button or moving a lever" and "made its victims invisible as people eviscerated by bayonets, or seen through the sights of firearms could not be." AI doesn't change the logic or effect of impersonal warfare but rather reinforces the former and amplifies the latter.
Dr AI Strangelove
The consequences of another level of removal will be unimaginably grim. Or maybe we can imagine them only too well. Writing of World War II, Hobsbawm points out that with bomber planes flying above, those below “were not people about to be burned and eviscerated, but targets.” The impersonal nature of the distance meant that “mild young men, who would certainly not have wished to plunge a bayonet in the belly of any pregnant village girl, could far more easily drop high explosive on London or Berlin, or nuclear bombs on Nagasaki.”
That commercial AI chatbots have been found to choose nuclear war nearly ten times out of ten in “crisis situations” makes headlines because we think of what a robot overlord might do to — or for — us in extremis, absent the hero of WarGames talking the machine out of it. Nearer to home, the immediate and growing threat isn’t the machines but, as always, us — and what we use the machines to do or absolve ourselves of doing.
The distance AI puts between the human mind and the decision to destroy ought to be what terrifies us above all. There may be no limit to the horrors that ensue. Whatever else it may be, human history is also the history of using technology to destroy one another. Today we are past masters of the art — not only in the ruthless efficiency of physical erasure but in the ways we make that destruction easier to direct, easier to justify, and easier to live with before, during, and after the fact.
As Hobsbawm warned of the “short” century that ran from 1914 to 1991: “The greatest cruelties of our century have been the impersonal cruelties of remote decision, of system and routine, especially when they could be justified as regrettable operational necessities.” Hobsbawm was right in identifying the cruelty of the short century’s industrial slaughter. The question for us is what that slaughter will look like when that cruelty is further intensified by the new distance resulting from AI-mediated decision-making.