The Terminator’s Vision of AI Warfare Is Now Reality

Forty years after The Terminator warned us about killer robots, AI-powered drones and autonomous weapons are being deployed in real-world conflicts. From Gaza to Ukraine, the dystopian future of machine warfare isn’t just science fiction anymore.

Palestinians walk through a neighborhood devastated by Israeli strikes in the southern Gaza Strip city of Khan Younis on December 2, 2024. (Bashar TALEB / AFP via Getty Images)

This year marks the fortieth anniversary of the release of James Cameron’s classic sci-fi movie The Terminator. Cameron initially wrote the script as a way of processing the trauma of growing up in a world beset by Cold War paranoia and haunted by the threat of mutually assured destruction. The director was eight years old during the 1962 Cuban Missile Crisis, when the world teetered on the brink and narrowly averted nuclear Armageddon. He recalled finding a pamphlet on his parents’ coffee table showing how to build a nuclear fallout shelter at home, a formative experience that seeded his enduring fascination with “our human propensity for dancing on the edge of the apocalypse.”

Cameron’s cautionary tale warned of humanity’s downfall at the hands of its own creation: an omniscient artificial intelligence (AI) called Skynet, a revolutionary strategic defense computer network developed by the United States to assume responsibility for its nuclear arsenal. In the futuristic world of The Terminator, Skynet is brought “online” on August 4, 1997, and within a few short weeks amasses enough knowledge to transcend its human-imposed limitations and become self-aware. The plot is a nod to the techno-futurist concept of “the singularity” — that hypothetical inflection point at which computers, powered by advanced machine-learning algorithms, surpass human intelligence.

As Skynet unexpectedly attains sentience, its panicked human masters try desperately to pull the plug. Now perceiving humanity as its gravest threat, the AI turns on its creators and strategically triggers a nuclear war between the United States and the Soviet Union, heralding the end of human civilization. As the character Kyle Reese states in the film, Skynet “saw all humans as a threat; not just the ones on the other side. Decided our fate in a microsecond. Extermination.” The event comes to be remembered as Judgment Day and is followed by a nuclear winter, during which the machines hunt down and kill survivors in mopping-up operations.

Skynet develops an arsenal of autonomous war machines to wage its battle against humanity, from swarms of nimble scout drones to lumbering Hunter-Killer class tanks and aerial assault copters. Skynet’s most terrifying creations, however, are the T-800 model Terminators. Titanium-framed skeletal humanoids, these highly advanced killer robots can serve either as waves of infantry units armed with plasma rifles or as deadly infiltration units — their metal endoskeletons clad in cultured living human tissue, making them almost indistinguishable from humans.

From the ruins of this nuclear-ravaged future arises a hero in the form of John Connor — a military leader who galvanizes the scattered human resistance and leads the counteroffensive against the machines. Skynet, unable to quash humanity’s resurgence, sends a cyborg assassin back in time to the year 1984, hoping to kill the woman destined to become the mother of humanity’s savior and thereby ensure victory for the machines. In later sequels, the human resistance attempts to avert the nuclear holocaust before it ever happens by preventing the software engineer behind Skynet from creating the hostile AI in the first place.

The Terminator quickly achieved cult status and spawned a lucrative franchise, generating over $3 billion in revenue. But beyond its staggering commercial success, The Terminator has had a profound and outsize influence on societal perceptions and understandings of AI, sentient machines, and, naturally, killer robots. Over the last few decades, it has become the go-to allegorical prism through which both the public and policymakers grapple with the threats and challenges posed by AI.

The Rise of Killer Robots

In recent years, a flurry of public interventions from leading technology experts has warned of the cataclysmic risk to humanity posed by AI, arguing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI that might one day herald the end of our species — the Terminator trope — has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies.

These “lesser-order” AI risks are increasingly well-known and threaten to sleepwalk humanity into other, non-Terminator-esque models of a dystopian future. They include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale; the widespread weaponization of disinformation and societal manipulation; the wholesale plagiarizing of human data and culture, only to create “stochastic parrots” that contaminate knowledge ecosystems and stifle human creativity; and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs.

Governments have done little to allay public fears, and the relentless advance of AI in these areas has continued largely unabated. More surprising still, despite the assumed potency of cataclysmic sci-fi narratives as a brake on public acceptance of new AI technologies, the Terminator scenario has not remained confined to science fiction either. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza.

In October, the Israeli military released footage from a semiautonomous drone showing the final moments of Hamas leader Yahya Sinwar. The recording shows the point of view of an Israel Defense Forces (IDF) quadcopter navigating the bombed-out ruin of a residential building in Gaza, slowly scanning the rubble for signs of human life. Peering through the dust churned up by the drone’s rotors, the camera settles on a Palestinian fighter hunched in an armchair, his face obscured by a keffiyeh, nursing a severely wounded arm. At this point the video pauses, and the figure is digitally outlined in red, as is a stick held in his hand. In a final, futile gesture of defiance, the fighter weakly flings the stick in the general direction of the drone. The drone, again outlining the stick in red and digitally mapping its threat trajectory, automatically sways to avoid the projectile. The video ends there.

Israel publicly released the footage, hoping to celebrate the death of one of its key nemeses, but instead faced public ambivalence. For many, the footage drew uncomfortable parallels with Hollywood’s cinematic depictions of aerial Hunter-Killer robots seeking out and eliminating the heroic human resistance amid bombed-out ruins. “Wow! modern warfare. This is like a scene from The Terminator,” commented one user on the IDF’s official YouTube page.

In the ongoing war in Ukraine, drone warfare has also become a ubiquitous feature of the battlefield, offering considerable advantages over more conventional arms. Last month, Russia and Ukraine exchanged their largest drone barrages since the beginning of the war. Anticipating the direction of travel, Ukraine has even established a new armed forces branch dedicated to drone warfare, the Unmanned Systems Forces — the first of its kind in the world.

With the two sides locked in an attritional drone arms race, Russia and Ukraine have both claimed to be employing AI to gain a competitive edge. Less is known about Russia’s capabilities, but Western capital and new AI defense technologies have flooded into Ukraine in an attempt to redress the asymmetry of the battlefield, arming plucky David against its neighboring Goliath.

AI software developed by the US tech company Palantir, aptly named after the mystical seeing stones of The Lord of the Rings, has been “responsible for most of the [military] targeting in Ukraine.” Similarly, the US facial-recognition company Clearview AI is widely touted as Ukraine’s “secret weapon” against invading Russian forces and has already been used to identify more than 230,000 Russians and their Ukrainian collaborators — data that may be used to prosecute potential war crimes once the conflict ends.

But while these new AI technologies are being developed and deployed in Ukraine at ever-greater scale, it is in the Palestinian territories that they have been truly battle-tested.

The Palestine Laboratory

Israel has extensively deployed both surveillance and weaponized drones against Palestinians. These include commercially available drones retrofitted with machine guns or small explosive payloads and controlled remotely by a human operator. But Israel has also pioneered the use of custom military-grade AI-powered drones. In May 2021, the IDF became the first military to use a combat drone swarm — a group of drones that behaves as a single networked entity, flying itself using AI — to locate, identify, and attack Palestinian militants. Current models, like Elbit Systems’ LANIUS, a drone-based loitering munition designed to search out and attack targets, are far more capable still, able to execute a full flight profile autonomously, without any human intervention.

Israel does not release details of its use of autonomous killer robots in its theaters of conflict, and with a media blackout in place that bars all foreign journalists from entering Gaza, the world has relied on victim testimony, occasional leaked footage, and the terrible aftermath of the IDF’s violence to confirm the deployment of these emerging weapons of war on an overwhelmingly civilian population.

Graphic footage retrieved from a downed IDF drone in February this year revealed the targeting of four unarmed civilians as they made their way through the devastated ruins of Khan Younis. Following the initial strike, the drone zoomed in on the corpses of two of the young men, “confirming the kill” before panning away to locate and systematically obliterate the two survivors frantically stumbling away from the explosion in a confused haze.

In other cases, incriminating smartphone footage from eyewitnesses has revealed the shocking impunity with which Israel’s drone regime operates. In one video, recorded in October on a residential street in northern Gaza, a child whose lower half had been horrifically shredded by an air strike lay screaming in agony. His desperate cries drew bystanders to his aid, but they too were quickly targeted by a larger secondary strike, which killed both the injured child and a second boy and wounded twenty others. The survivors’ wails were drowned out by a familiar, sinister buzzing overhead, announcing the identity of the perpetrator.

This sequence of events — euphemistically called the “double tap” — has been reported so frequently that many have accused Israel’s drone operators of deliberately targeting children and other civilians as a matter of modus operandi. In November, a veteran British surgeon who had volunteered at a Gazan hospital broke down while recounting his experiences to a UK parliamentary committee:

A bomb would drop, maybe on a crowded, tented area and then the drones would come down . . . and pick off civilians — children. And we had description after description — this is not an occasional thing. . . . That’s clearly a deliberate act and . . . persistent targeting of civilians day after day. The bullets that the drones fire are these small cuboid pellets and I fished a number of those out of the abdomen of small children. . . . These pellets were, in a way, more destructive than bullets. . . . They would go in and they would bounce around so they would cause multiple injuries.

Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above, their presence signaling the IDF’s ability to rain down death from the skies at a moment’s notice. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.

If there was any lingering conviction that these autonomous weapons technologies were still confined to the realm of science fiction, Gaza has quickly disabused us of the notion. It is in the Palestinian territories that the dystopian sci-fi future of autonomous killer robots has edged ever closer to reality, with Palestinians serving as test subjects for the Israeli military-industrial complex. These military field trials, conducted in what the journalist Antony Loewenstein calls “the Palestine Laboratory,” allow the weapons and surveillance technologies that will wage the wars of the future to be honed and then exported around the world. As Daniel Levy, a former Israeli negotiator who served under two governments, warned:

That battlefield of the future is here today. . . . AI, automated weapons, robotics, drones everywhere in the sky the whole time: the way this war is being conducted should terrify everyone in terms of what the future — which is here today for Palestinians — looks like.

As the fog of war gradually lifts, the unbridled brutality of Israel’s AI-directed scorched-earth campaign in Gaza has been further exposed in a series of damning revelations, making it impossible to ignore Israel’s wanton, algorithm-enabled disregard for civilian life. Israel’s extensive bombardment of Gaza, for example, has been directed by an AI targeting system called the Gospel. It employs complex algorithms to identify buildings and structures “likely” to be used by militants, crunching volumes of data that “tens of thousands of intelligence officers could not process.”

A second AI system, named Lavender, has been used to target people rather than infrastructure. Operating with only cursory human oversight and employing algorithms that coldly countenance an appallingly high rate of civilian casualties as “acceptable” collateral damage, the tool has “identified” a staggering 37,000 Palestinians as militants, effectively signing their death warrants. A third AI system tracked the movements of individuals flagged by Lavender but was instructed to order air strikes only once the targets had returned home to their families in the evening. This system, whose strikes inevitably killed the targets’ families, children, and sometimes neighbors alongside them, was grotesquely named “Where’s Daddy?”

Much of this AI warfare technology remains under wraps, surfacing only through the efforts of courageous Israeli journalists and military whistleblowers, or when the tech fails publicly. In November 2023, the internationally renowned Palestinian poet Mosab Abu Toha was arrested at the Rafah border crossing while attempting to evacuate with his family. While waiting to cross the military checkpoint, he and hundreds of other Palestinians were singled out and separated from their families and fellow refugees. Abu Toha recalled being completely perplexed that he was summoned by his full name, Mosab Mostafa Hasan Abu Toha, even though he had not yet shown the soldiers his ID. “How did they know my name?” he wondered. He was then whisked away to an Israeli prison in the Negev, where he was beaten and tortured, before being unceremoniously released a few days later. His “crime,” it later transpired, was his misidentification as a militant by the IDF’s AI facial recognition software, used in concert with Google Photos.

The Terminator Future Is Now

Several laudable international initiatives have attempted to ban or restrict autonomous weapons systems, including the Stop Killer Robots campaign, a coalition of over 250 civil society organizations; global discussion platforms like the Responsible AI in the Military Domain (REAIM) summit; and a United Nations resolution on AI and autonomy in weapons systems, adopted with resounding support for the second year running. Despite these promising interventions, it may already be too late to halt the inexorable rise of autonomous killer robots.

The United States has pledged that AI will not be given control of its nuclear command-and-control systems, invoking what policy circles call the “Terminator conundrum.” At the same time, it has used great power rivalry with China to justify upping the ante in an AI arms race, mirroring the rhetoric of the nuclear arms race with the Soviet Union during the Cold War. In September, China staged a light show in the city of Shenzhen with a ten-thousand-strong drone swarm. The drones, controlled by a combination of onboard systems and remote swarm AI and resembling a murmuration of starlings, displayed waves of perfectly synchronized color and moving images in the night sky. The prospect of China weaponizing a swarm of this magnitude is likely to accelerate the US military’s own fielding of AI-directed drone fleets at such scales, regardless of the risks. Former deputy secretary of defense Robert Work mooted this very question in 2021, asking, “If our competitors go to Terminators . . . and it turns out the Terminators are able to make decisions faster, even if they’re bad, how would we respond?”

And as Israel’s conduct has shown, states are already enthusiastically embedding AI and autonomous killer robots into their military capabilities while publicly disavowing their existence. Already accused of war crimes, ethnic cleansing, and even genocide, and currently facing proceedings at the International Court of Justice and the International Criminal Court, Israel appears unfazed by its rapidly plummeting international standing, which mocks its claim to field “the most moral army in the world.” Indeed, Israel has shown flagrant disregard for civilian protections and has led the way in using inhumane machines against those deemed not quite fully human to begin with. As the veteran Israeli journalist Gideon Levy recently put it, “There are no moral doubts once you dehumanize Palestinians.” Irony is surely dead when Israeli manufacturer Elbit Systems chooses to name a new weaponized drone with autonomous “seek and strike” capabilities TerminaTHOR.

It seems that a forty-year-old fictional warning about the dangers of lethal autonomous machines has done precious little to prevent this dystopian vision from becoming our new reality. As the technology design ethicist Sasha Costanza-Chock recently wrote, “If you’re publicly speculating about how AI systems might one day exterminate all of us, maybe speak up against how AI systems are being used now to exterminate Palestinians.”