Talk of the apocalypse is everywhere. The twin threats of environmental collapse and artificial intelligence (AI) have ballooned in the public imagination. The prognosis feels grim. As even Barack Obama pointed out long ago, the complexity of climate change exploits the weakest point in the international system: its inability to coordinate planning in the face of crisis. AI presents a similar challenge: its already vast global digital infrastructure makes regulation a seemingly Sisyphean task. These types of threats have been dubbed “existential risks” by philosophers like Nick Bostrom and William MacAskill — problems that could lead to human extinction or the irrecoverable collapse of civilization. Disaster feels more palpable than at any time since the Cold War.
But it is hard to disentangle this new doomer culture from the real risk of an extinction-level event. Between current AI technologies and a Skynet-style extinction scenario lies a whole science fiction novel — or maybe even a series. Fears of “superintelligence” or “machine agency” currently have little reality to them. But this does not mean that what we do with AI is not risky. Plugging our algorithms into finance, public policy, and the distribution of goods around the networked earth contributes directly to the rising heat at the root of the environmental crisis, all while creating potential instability on the lowest rung of the Maslow hierarchy.
Take the example of SpaceX, which controls more than half the satellites in low orbit around our planet. The US government has effectively allowed military strategy in Ukraine — which relies on the company’s internet service for communication and crucial wartime operations — to become heavily dictated by the whims of a mercurial CEO.
Recent deep dives into Elon Musk’s personal and professional life should leave us disturbed about such an unprecedented concentration of power in the hands of a private citizen. Yet as shocking as recent coverage of the Tesla titan has been, these exposé-style profiles have tended to make a crucial error. By focusing on Musk’s virulent megalomania — particularly his desire to play Batman, using his companies to “innovate” solutions to the conflicts and threats that humanity faces today — the media has generally overlooked how Big Tech billionaires like Musk amplify the very risks they ostensibly aim to mitigate. Billionaires are the existential risk. Addressing the climate catastrophe and the AI panic requires eliminating the category “billionaire” altogether.
We often hear that AI and cryptocurrencies use massive amounts of electricity, and so contribute to global warming. We also hear that climate is too complex to understand intuitively, and therefore we must rely on data — but our reliance on digital channels raises the risk of AI-abetted misinformation. Skeptical voices on both sides have failed to address the layer of the problem that really connects the two areas: capitalism.
The new digital infrastructures and the climate crisis are inextricably linked. Both are the bastard children of planetary-scale neoliberalism, a cancerous economic logic that now threatens to gobble up its natural resource base. Criticism of AI tends to focus on either its bias and potential harm or on its global immiseration of labor and extractivist mining practices. The form of capitalism we currently live in is rarely mentioned, or is taken for granted. Radical environmentalism gave way to green capitalism and the billionaires who stand to profit from it. It’s no coincidence that these bourgeois modes of critique focus on technology and miss capital, the elephant in the room.
Environmental catastrophe and metastasizing digital systems are mutually reinforcing — two crises locked in a death spiral. Considering AI and climate change as a unit is the only way out of the impasses created by current attempts to address both. Billionaires are an existential risk because they are the node linking the two disasters.
The representatives of capital know that something is wrong. When Elon Musk bought Twitter, text messages between the billionaire and MacAskill revealed that he thought Twitter was the “future of human civilization,” so that not preserving it would be its own kind of existential risk. Despite Musk’s obsession with preventing human extinction, his actions since buying the platform have revealed that he is a threat to humanity, and his satellite network only reinforces that point. Climate change and AI are risks because the power both to increase and to mitigate them is concentrated in too few hands.
In late May, the Center for AI Safety released a twenty-two-word statement endorsed by leading tech CEOs, AI researchers, and engineers. The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was the second such public statement this year, following a splashy open letter released by an existential risk think tank, the Future of Life Institute, which demanded a six-month pause in AI research. Public reactions to these warnings primarily focused on whether the concerns they raise about human extinction are reasonable, or whether they are a deliberately hyperbolic publicity stunt cynically designed to drive investments in AI research by making the technology appear more dangerous — and thus more advanced — than it actually is. This is wrong.
We should accept that AI is an existential risk, just as we all know today that climate threatens humanity at large. But AI is not a direct risk — it is a risk packaged in a long-term collective fumbling of the bag that has led to a capitalism that would be unrecognizable to any serious economist of past generations.
This is an unprecedented moment in which corporations are not only desperately begging for government regulation of their own industry — they are tacitly admitting that they are helpless before the groaning engine of capital. The letters’ brilliant and/or fabulously wealthy signatories confess to being compelled by market forces to continue developing a technology that they worry may already be outside of their control, irrespective of the risks to the human species. The billionaires and their hangers-on admit that the real existential risk to the human species is not AI itself, but a hypercharged techno-capitalism that makes dangerous research too seductive to pass up, even when those doing the research desperately wish that someone would stop them.
Capital is eating its own tail, and humans are going to be chewed up along the way if we do not reverse course. Where Karl Marx saw the subjection of labor to capital, we are witnessing capital’s invasion of its historic seat of power, the corporation itself. Industry leaders and even billionaires themselves know this is not sustainable, but their “recognition” of it in public comes in a mystified form that passes off Terminator fantasies and geoengineering solutions as responses.
Let’s not get it confused: AI and climate are existential risks. But the risk part of existential risk comes from the social formation we have allowed to develop, not from technology itself. We have seen calls over the course of our young century to consider whether it is moral to allow billionaires to exist. But the real question is whether our species can survive the billionaire.