The Socialist Case for Longtermism

“Longtermism” is often associated with billionaire philanthropy. But this idea in vogue among effective altruists is perfectly compatible with a socialist worldview.


To state the obvious: vastly more people will live in the future than are alive today. Of course, there’s always the chance we’ll destroy ourselves within a matter of decades. But if we survive as long as the typical mammalian species, humans have around 700,000 years left, and the earth will remain habitable for hundreds of millions of years more.

As socialists, we want to build a society that redistributes power and resources from the ruling class to the masses of working people. But there’s another dimension of politics we sometimes overlook: our power over a different many, those not yet born.

These future people matter, a fact we readily acknowledge, perhaps most often when discussing climate change. To borrow a metaphor from the late philosopher Derek Parfit: if I break a bottle in the woods and the glass cuts a young girl living one hundred years from now, am I blameless simply because she doesn’t exist yet? We should not discount future lives because they are separated from us in time, any more than we would discount people living on the other side of the world. This is a key insight of longtermism, which Vox has called “effective altruism’s most controversial idea.” To me it jibes perfectly with socialist common sense.

The Climate Threat

If we really believed that future people mattered just as much as those of us alive today, we would organize society very differently. In addition to more aggressively prioritizing action on climate change, we would spend much more of our resources on trying to prevent civilization-ending catastrophes and increasing the scope of our moral concern. Longtermism — the idea that positively influencing the long-term future is a key moral priority of our time — follows from these premises: that future people matter, that there will be many of them, and that our actions now can influence their lives.

Nowhere is this more obvious than in global warming. As documented in the recent longtermist book What We Owe the Future, almost all of the consequences of climate change will be borne by people who don’t yet exist. Under the Intergovernmental Panel on Climate Change’s intermediate emissions scenario, sea levels are projected to rise by ten to twenty meters over the next ten thousand years. Most of Hanoi, Shanghai, Kolkata, Tokyo, and New York would be buried beneath the waves.

We owe it to all the generations yet to come to get to net-zero emissions as soon as possible, using every decarbonization tool we have, from large investments in green technology through a Green New Deal to nuclear power. And because natural processes will take tens to hundreds of thousands of years to restore CO2 concentrations to preindustrial levels, we have to go beyond net zero and actively remove carbon from the atmosphere.

While climate change poses a severe and unprecedented threat to humanity and will likely cause preventable suffering, displacement, and even death for millions of the world’s poorest people, it is unlikely to cause complete human extinction. This is not to minimize the need to address it — we need to use every weapon in our arsenal to mitigate this rolling tragedy — but only to distinguish it from another class of events, those that could wipe out humanity entirely. And thanks largely to the efforts of activists and governments around the world, we have made progress on emissions, making extreme warming scenarios significantly less likely.

Death From Above

While climate change is already here and an immediate threat, the chances of other natural events wiping out the human species are, thankfully, extremely low. Toby Ord, an Oxford researcher and author of The Precipice, a book on existential risks, estimated the odds of existential catastrophe in the next hundred years for a range of events, defining an existential catastrophe as an event that permanently destroys humanity’s long-term potential. He puts the existential risk from natural events at one in ten thousand, driven almost entirely by the risk of a supervolcanic eruption. (Thanks to NASA’s investment in tracking “near-earth objects,” we can have confidence in Ord’s one-in-one-million estimate of the existential risk from an asteroid or comet impact.)

According to Ord, the much greater risks to humanity are self-inflicted. Climate change comes out to a one in one thousand chance — the same as that of nuclear war. These risks are unacceptably high — would you fly in a plane that had a one in one thousand chance of plunging from the sky? At that rate, there would be over 150 commercial airline crashes per day, rather than the twelve total in all of 2021.
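
For a sense of where a figure like that comes from, here is a rough back-of-the-envelope calculation; the number of daily commercial flights is an assumed round figure used purely for illustration, not a statistic from Ord or the aviation industry.

# A rough check of the airplane analogy above.
# The flights-per-day figure is an illustrative assumption, not a sourced statistic.
flights_per_day = 150_000            # assumed global commercial flights per day
crash_risk_per_flight = 1 / 1_000    # the hypothetical one-in-one-thousand risk
print(f"Expected crashes per day: {flights_per_day * crash_risk_per_flight:.0f}")  # prints 150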

Unfortunately, we also have to contend with risks from novel threats. By some measures, biotechnology has improved even faster than computers over the last few decades. We may not be far from a world where a lone actor with a bit of bioengineering know-how can concoct a hypertransmissible, perfectly lethal superbug in their garage.

The world was brought to a halt by a pandemic that, for all the suffering it’s caused, is weak sauce compared to the Black Death or even the Spanish flu of a century prior, let alone a disease engineered to maximize damage. Ord estimates the risk from an engineered pandemic at one in thirty, over three hundred times higher than his risk estimate for natural pandemics.

The Weird Robot Stuff

The final major anthropogenic risk may be the most speculative and controversial. Artificial intelligence (AI) is getting more and more powerful. DeepMind’s AlphaZero program began beating the best chess program in the world after four hours of playing itself. In that time, it ran nearly 20 million self-play games, illustrating the significant speed advantages afforded to machine minds. AlphaFold, also from DeepMind, “cracked one of biology’s grand challenges” when it demonstrated the ability to “predict the structure of proteins within the width of an atom.” Lab techniques for determining a protein’s shape can require hundreds of thousands of dollars and years of work; AlphaFold can do it in a few days.
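
To get a feel for the speed those figures imply, here is a quick bit of arithmetic using only the numbers quoted above; it is a sketch of the aggregate rate across however much hardware DeepMind used, not a figure reported by DeepMind itself.

# Rough rate implied by ~20 million self-play games in four hours.
# This is an aggregate across all hardware, shown purely for illustration.
games = 20_000_000
seconds = 4 * 3600
print(f"About {games / seconds:,.0f} self-play games per second")  # roughly 1,400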

This April, two major advances in AI were announced. DALL-E 2, an image generation system from OpenAI, can generate or modify photo-realistic images from natural language text inputs, and PaLM, a language model from Google — essentially a souped-up chatbot — demonstrates understanding of complex natural language.

One common objection to fears of AI displacing humans is that while existing models can beat us in narrow domains (e.g., perfect information games like chess), they fail on more general tasks. But PaLM’s abilities are more general than those of past systems — the same model can solve math word problems, understand jokes, and even code. Its performance on thousands of grade-school word problems nearly matches the average of nine-to-twelve-year-olds. Most shocking to me was its accurate interpretation of jokes, which combine the distinctly human elements of world knowledge, logic, and wordplay.

These two announcements led to a twelve-year drop in the median estimate on Metaculus, a forecasting site that aggregates community predictions, of when we will develop artificial general intelligence (AGI); the median now sits at 2045. A survey of over seven hundred machine learning researchers conducted over the summer estimated a 50 percent chance of AI systems that beat humans at every task by 2059, just thirty-seven years away. (A similar survey from 2016 estimated 2061.)

Both DALL-E 2 and PaLM could have immediate malicious applications, like generating deepfakes and other misinformation. The risks of greater harm rise in tandem with capabilities, and as AI systems and computing power become more widely available, the risks from malicious use grow as well.

In March, researchers published work demonstrating how an AI system built to identify potentially useful drugs could be inverted to search for toxins instead. In six hours, it generated 40,000 potentially lethal molecules, including some compounds that scored higher for lethality than any known chemical weapons.

As these systems become more powerful, we will entrust them with more responsibilities, reaping benefits while introducing new dangers. As anyone who has used a computer knows, they don’t always behave as we would like them to. AI systems, especially once they are applied to more general problems, often do unexpected and inexplicable things. These errors are often amusing or frustrating but quickly become terrifying when AI is applied to situations with real stakes, like driving cars, executing financial trades, or controlling weapon systems.

The challenge of getting powerful AI systems to do what we want them to (i.e., the “alignment problem”) rises to the level of an existential risk when we consider the possibility of AGI: a system that matches or exceeds human-level performance on any task we can do. That would include developing AI systems, leading many researchers to think that an AGI could recursively self-improve, rapidly scaling up its abilities. Such a system would have superhuman levels of intelligence, perhaps enough to dramatically influence the world.

Humanity’s unusual capacity for intelligence was sufficient for us to subjugate the entire planet, dominating all other species. A system that far exceeds the intelligence of our smartest individuals or even that of humanity as a whole could use its wits to become the globally dominant power. If the system’s goals don’t align with ours, we could find ourselves at its mercy. We might be treated the way we treat most animals in nature — as an afterthought or curiosity. Or, much worse, we might be treated as instrumentally valuable, like the way we treat animals in factory farms. If you’re still not sold, think about how little gorillas could do to stop us if humanity decided that capturing or killing every one of them was a significant priority.

Many of the world’s wealthiest and most powerful institutions are investing hundreds of billions of dollars in developing AI capabilities. And the organizations on the cutting edge explicitly want to create generally intelligent systems. Google’s blog post announcing PaLM ends with its vision to, essentially, create an AGI.

Ord puts the risk of an existential catastrophe from AGI at one in ten — higher than all the other risks combined.

Our Moral Circle

Obviously these estimates are deeply uncertain, but there are very real reasons to take these risks seriously. We live in a radically different world than the one our great-grandparents were born into. Looking to the past gives us little information about how things will change in the future.

Like efforts to halt climate change, reductions in existential risk are global public goods, which everyone benefits from. But while the benefits are distributed widely, the costs of providing them are borne acutely. Because of this dynamic, profit-seeking actors systematically neglect public goods. And sometimes profit seeking directly motivates risky behavior, as when the promise of unprecedented riches spurs the rapid development of AGI.

Governments are best positioned to reduce existential risks, both because they can make investment decisions not bound to a short-term bottom line and because they have historically been the greatest sources of existential risks, through their development and use of nuclear and biological weapons. This makes it all the more important for us to build mass movements that can win power and pursue policies that protect us and the unborn masses.

Combating these risks will require global coordination unlike anything we’ve seen before. We need to build new institutions that reduce existential risk, recommit to existing ones, and revive lapsed agreements like the Intermediate-Range Nuclear Forces Treaty with Russia. We should go further and get nuclear-armed powers to commit to “no first use” pledges and work toward a world without nuclear weapons. (For more nuclear policy ideas, see the excellent book The Doomsday Machine by Pentagon Papers whistleblower Daniel Ellsberg.) The Biological Weapons Convention receives less annual funding than the budget of a typical McDonald’s restaurant. We should change that.

It’s harder to know how to approach AI, given how far we probably still are from developing AGI and how tightly linked the risks and benefits are. An AGI aligned with our interests, broadly understood, could create a world of abundance for all.

Issues that socialists already care deeply about, like fighting climate change and averting a new cold war with China, are essential to the project of safeguarding a future for humanity. In addition to the immediate harms they cause, great power conflict and the destabilization caused by climate change increase other existential risks.

If we’re going to fight to ensure that a future exists, we should fight at least as hard to ensure that it is a future worth experiencing. This is the second key plank of longtermism: that we build a society where the interests of everyone are deeply considered. It’s nearly impossible to know what people one thousand years from now should do. But if we look backward, we can see lasting progress in the efforts to expand the circle of our moral concern and to create an economic system where such concern is rational.

The greatest atrocities in human history, from slavery to colonialism, can be attributed at least in part to the perpetrators’ failure to see their victims as worthy of moral consideration, but also to the economic motives behind oppression and exploitation.

What separates socialists from longtermists is a materialist vision of what compels actors and the goal of a world beyond capitalism. But philosophically, there’s no reason why we shouldn’t join them in including those not yet born in our circle of moral concern.