Tech Capitalists Don’t Care About Humans. Literally.
It’s not just a figure of speech to say tech titans are indifferent to humanity. From Peter Thiel to Elon Musk, many are adherents of a worldview that envisions humans being replaced by digital post-humans and sees this as progress.

Elon Musk sees himself as a messianic figure who is going to play a pivotal role not just in human history but in cosmic history. This isn’t just private delusion, but the result of a concrete ideology he shares with other tech barons. (Marvin Joseph / The Washington Post via Getty Images)
Interview by Doug Henwood
Watching tech moguls throw caution to the wind in the AI arms race or equivocate on whether humanity ought to continue, it’s natural to wonder whether they care about human lives.
The earnest, in-depth answer to this question is just as bleak as the glib one. As moral philosopher Émile Torres argues, many Silicon Valley leaders embrace a vision of a transhumanist future in which biological humans will be replaced by digital beings endowed with superintelligence. This vision helps explain their obsession with artificial general intelligence (AGI) and sits at the core of what Torres describes as human extinctionist preferences.
In 2023, Torres and his colleague Timnit Gebru coined the acronym TESCREAL to describe a constellation of ideologies — Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism — that have become highly influential within Silicon Valley. Torres is a philosopher, intellectual historian, and journalist whose work focuses on the ethics of emerging technologies, particularly AI and human extinction.
In this conversation with journalist Doug Henwood recorded for the Jacobin podcast Behind the News, Torres explains the TESCREAL worldview, its connections to eugenics and IQ realism, and why figures like Elon Musk, Peter Thiel, and Sam Altman embrace visions of a post-human future.
TESCREAL is a mouthful. What does it mean?
I’ve never claimed that it’s a pretty word. It’s not. But it is useful.
The key idea at the very heart of the TESCREAL worldview is this techno-utopian vision of the future whereby we develop advanced technologies that enable us to radically reengineer the human organism to create a new post-human species — so-called transhumanism. We will spread beyond Earth, colonize Mars and then the rest of the galaxy, and ultimately create vast computer simulations full of trillions and trillions of digital people.
And the whole reason for doing that is that the TESCREAL worldview instructs us to maximize the total amount of value within the universe. People are the containers or the substrates of value. So the more people you have, the greater total amount of value you could potentially create.
This value, what is it exactly? And who is enjoying it?
This is a really excellent question. TESCREALism has been significantly influenced by an ethical theory called “totalist utilitarianism.” Totalist utilitarianism takes value to be an impersonal thing. Value doesn’t matter because it benefits some particular person. The idea is that the universe itself is just better if there’s more value in it.
If we think of it in terms of money or wealth, it’s like saying that the more money or wealth there is, the better. It doesn’t have to be money. Another interpretation of value would be that it is something like pleasure or happiness. So the more happiness or pleasurable experiences the universe contains, the better it becomes — not necessarily for anyone, but merely from what total utilitarians refer to as the point of view of the cosmos. There’s this disembodied cosmic eye looking down on the universe, and it judges the universe to be better the more value it contains, and it judges the universe to be worse the less it contains.
There’s a weirdly religious angle to that.
There is. And I would say that there’s also a kind of capitalist influence, the idea that human beings do not matter in and of ourselves. In this worldview, we matter for the sake of value, rather than value mattering for the sake of us.
This is turning me into a really sentimental humanist. I like to think of myself as a little more disciplined than that, but God, it’s gonna make me all soppy about the value of humans.
It’s literally called impersonalism, as in the impersonalist understanding of value. Again, the key idea is that we are just means to an end. The only end is value, this abstract yet quantifiable concept that should be maximized to the physical cosmic limits. We matter only as the conduits through which this value can come into existence. That idea is integral to the TESCREAL movement.
Elon Musk said in a recent X post that humans are a “biological bootloader for digital superintelligence.” Is this what you’re talking about? Humans will create this fabulous intelligence, but it will surpass us and leave us totally in the dust.
Yes, exactly. There are a couple of ways to think about this. One is that superintelligence could be really important because it could facilitate the realization of the TESCREAL vision of the future. TESCREALists tend to think of everything as an engineering problem. So the reasoning is that if we have a superintelligence, then we have a super-engineer. If we have a super-engineer, then we can engineer paradise, which is a term that they literally use.
Superintelligence will figure out how to colonize space, as Sam Altman, the CEO of OpenAI, has said. Without artificial general intelligence, or AGI, we probably cannot colonize space. Superintelligence is going to figure out how to “cure” aging so that we get to live forever or upload our minds to computers. But the other key role that superintelligence might play is that it could multiply into a population that simply replaces humanity.
Maybe our species is, in a sense, not worthy to fulfill the long-term vision of TESCREALism. And what we need is some population of “smarter” beings that can actually go out and colonize space and reengineer or redesign galaxies in order to harvest all of the energy contained in them to maximize value.
From my limited meatspace point of view, it seems that these creatures of the future would be something like simulations?
That is one possibility. So once you have digital intelligences, there are two options. One is that they could live in virtual reality worlds. This is what I was hinting at earlier when I talked about building these vast computer simulations full of trillions and trillions of digital people. They would be digital entities, AGIs or superintelligences, that just live in a virtual reality world, a simulated universe.
Another possibility is that these AI systems occupy something like android bodies or mechanical bodies that enable them to navigate our physical universe. So they’re still digital, but rather than existing within simulated worlds, they could exist within our actual world.
The real vision is that we have both. So we’d have superintelligences that go out and colonize space, and they would also build — and I’m quoting some leading figures within the TESCREAL movement here — “planet-sized computers” on which to run these virtual reality worlds where additional AIs would just be simulated beings in a simulated universe.
The transhumanist imperative to radically reengineer humanity is really at the heart of all of the other claims that TESCREALists make. We’re probably not going to be able to colonize space if we don’t radically reengineer ourselves or upload our minds to computers or just create a new species of AGI to completely replace humanity. Does that make sense?
Maybe I need to take some more drugs before I make any sense of this. Okay, so we talked a bit about the T, E, S, and C parts of the acronym. Let’s move on to the R, rationalism. What does that mean in this world?
Rationalism was founded by a very noted figure within the modern transhumanist movement, Eliezer Yudkowsky. The idea is that we need to take a step back and think about the best ways to optimize our rationality. The more optimally rational we are, the better positioned we are to create these post-human successors who will go out and colonize space. Rationalism is all about identifying cognitive biases in order to neutralize those biases. It’s about developing new theories in a field called decision theory, which is supposed to enable us to make decisions in the most rational way possible.
Most of us want to be more rational, and we don’t want to succumb to the distorting effects of cognitive biases. But when you start to look at the details of what rationalism is all about, it becomes really problematic. I’ll give you a brief example.
The founder, Yudkowsky, posted an article on his community blogging website, Less Wrong, which is the online epicenter of the whole rationalist community, in which he asked readers to imagine a situation in which they were forced to choose between two options. One is that a single individual is tortured mercilessly for fifty years. The other is that some unfathomable number of people suffer the almost imperceptible discomfort of having a speck of dust briefly in their eye. So which of these is worse? His argument was that the second, the dust speck scenario, is much worse because if you do the math. . .
This is insane material.
Yes, a guffaw is the appropriate response there. His slogan is, “Shut up and multiply,” meaning forget about your emotions, your moral feelings. Use your head, do the math, and then you’ll see that the second scenario is much worse than somebody being tortured for fifty years. Therefore, you should choose the torture option.
Another aspect of this worldview is a worship of high IQ, which gets us into the territory of eugenics. Can you talk about the IQ and eugenics angles?
IQ is very important to a lot of people in the TESCREAL movement. Many of them are, as they would put it, “IQ realists.” They think IQ measures something real about the human mind, and that this thing is very important.
A lot of psychologists, philosophers, and so on think IQ is complete nonsense. There was a really good critique of the idea of IQ by a statistician named Nassim Taleb. He points out that IQ tests can potentially be useful for identifying individuals who would score very low on IQ tests, but when it comes to people who are above average, it’s basically meaningless, because there are a million different ways to be “intelligent.” There are some people who are great at math, some people who have amazing common sense and wisdom, and some who are brilliant scientists but have no common sense and no wisdom to share.
I would liken the idea of intelligence to that of skill. If somebody says, “You should meet my friend Joe, he’s really skilled,” I’m wondering, “In what sense? In what way?” I need more information. Can he write a string quartet? Can he cook a really nice dinner? Can he build a good cabinet?
Historically, IQ was intimately bound up with eugenics. The tests were developed by leading eugenicists, many of whom had deeply racist and sexist views. They believed that the “white race” had superior intelligence relative to other racial categories, and they developed the test to confirm exactly what they wanted to show.
There is a very clear continuity between the eugenics movement of the twentieth century and the emergence of the TESCREAL movement. Leading figures within the TESCREAL movement have sounded the alarm about the possibility of dysgenic pressures, “dysgenic” meaning the opposite of eugenic, where eugenic means literally “good birth.” The idea is that intelligence is heritable, which is highly questionable from a scientific perspective. Dysgenics would occur if, for example, individuals who are “less intelligent” outbreed their “more intelligent” peers, with the result that the prevalence of low IQ people within the population will increase across generations.
People in the TESCREAL movement have explicitly worried about this. And this is an idea that the eugenicists of the twentieth century were completely freaked out about too. What if the non-white population of the United States grows in size and outbreeds the white population? Well, that could be really bad according to this view. It could cause dysgenic pressures, because according to these racist IQ realists, non-white populations have lower IQs. This led to old-school eugenics programs that involved banning interracial marriage and restricting immigration.
Transhumanism is uncontroversially a form of eugenics. In fact, transhumanism was initially introduced by some leading eugenicists of the twentieth century, such as Julian Huxley, a biologist who published several books on this topic. Transhumanism is eugenics on steroids. The old-school eugenicists just wanted to improve the human species. Transhumanism says: Why stop at perfecting humanity? Why not create an entirely new and “superior” post-human species?
This would all be a quirky mode of thought, except for the fact that people like Elon Musk, Peter Thiel, Sam Altman, and Marc Andreessen are part of this club to varying degrees. So how does this mode of thought relate to these Silicon Valley machers?
Marc Andreessen actually had the descriptor “TESCREALIST” in his Twitter bio back in late 2022 or 2023. And Elon Musk has said that longtermism is “a close match for my philosophy.”
And his “colonizing Mars” thing is totally part of this too.
Absolutely. Yes. Musk sees himself as a messianic figure who is going to play a pivotal role not just in human history but in cosmic history. Because if he is the one who ushers in the age of AGI with his company xAI, then that’s going to be a monumental shift. It would, in his view, introduce an entirely new capability for us to then realize the techno-utopian vision of the TESCREAL worldview.
SpaceX factors into this too. For Musk, Mars is the stepping stone to the rest of the galaxy, which is the stepping stone to the rest of the universe. If he is the one to get us to Mars and who enables us to establish Earth-independent colonies there, then he will have launched us in the direction of realizing this utopian world, maybe more than anyone else on Earth right now.
All of these people also embody the same kind of discriminatory attitudes that animated the eugenics movement of the twentieth century. Elon Musk has warned about global population decline, but if you look closely at some of his tweets, it seems that he’s more explicitly worried about white populations declining. Several months ago, someone tweeted that the white demographic globally is about eight percent of the world population, and Musk’s reply to this was something to the effect of “and declining fast.” There is very much a racial component to his anxiety about not just immigration, but population decline. Musk basically thinks that white people are superior, and hence it would be very bad for the future of humanity and post-humanity if the white population were to decline.
In his long interview with Ross Douthat of the New York Times in June, Peter Thiel was asked by Douthat if he wanted the human race to endure. And Thiel hemmed and hawed before reluctantly saying yes, but it didn’t seem like he really believed it. And then he brought up transhumanism in the next sentence or two.
What is it these people want? Are they indifferent to or even welcoming of human extinction?
There are two essential things for people to understand about what’s happening in Silicon Valley. One is that the TESCREAL worldview is ubiquitous. It’s the water these people swim in and the air they breathe. So you just cannot understand what’s going on in Silicon Valley, especially with respect to the race to build AGI, without some understanding of transhumanism, longtermism, and all of these TESCREAL ideologies.
The second really important thing for people to understand is that a key component of the utopian vision at the heart of TESCREALism is a pro-extinctionist stance. Utopia looks like a world in which post-humans, not humans, are the ones who rule the world. When Peter Thiel hesitated, he was just channeling this pro-extinctionist component of the TESCREAL worldview. It’s not humanity, it’s post-humanity that is going to ultimately go out and colonize space, that’s going to run the show.
Peter Thiel holds a minority view within Silicon Valley, according to which the post-human beings that should eventually replace us ought to be extensions of our biological bodies. So he’s explicitly said that he wants to live forever, but he also wants to keep his body. He doesn’t want to upload his mind to a computer to become some digital being.
This contrasts with a slightly different version of pro-extinctionism that a lot more people in Silicon Valley hold, according to which the future is digital. You could distinguish between Peter Thiel’s biological transhumanism and what might be called digital eugenics. Digital eugenics is a version of pro-extinctionism that says those future post-humans that replace us should all be digital in nature.
They could either be uploaded human minds or entirely separate, autonomous, distinct entities like ChatGPT is. So you could imagine ChatGPT 10 or whatever, achieving the level of AGI and then taking over. In this sense, AGI is not an extension of us — it’s something completely separate. We didn’t become it, we just created it. Whereas Peter Thiel has this vision of post-humanity as something that we become rather than create. The key point, though, is that they’re all pro-extinctionist.
Now, as you describe it, this does not sound like a popular agenda. So is there room for coercion in their worldview?
Is there room for coercion? Yeah, I think so.
That’s a deliberately naive question.
Their vision of the future is not inclusive. If it were an inclusive future, it would also include humans. It doesn’t. It’s a future for post-humans.
It is also deeply elitist and extremely undemocratic. Right now, with their billions and trillions of dollars, they are trying to create a new world run by post-humans without ever having inquired about the opinions and preferences of the rest of humanity. They’re doing this without our consent, and they don’t really care one bit about what the rest of us have to say.
They believe in their vision, and they’re going to try to bring it about regardless. They truly believe that this is the right thing to be doing. There is zero input from the rest of humanity about what our collective human future ought to look like. You could describe this as profoundly coercive.
The Silicon Valley guys are the superintelligence for the moment, so who needs everyone else’s opinion?
Exactly. I think that’s what they believe. The rationalist community is very influential in Silicon Valley, and what the rationalist community basically tells its members is that once you’ve mastered this or that theory within the field of decision theory, once you’ve mastered the patterns of thought that are optimally rational, you then have direct access to fundamental truths about what the future ought to look like. And because rationality provides a universal and objective perspective on the world, you don’t need anyone else’s perspective. That is exactly how these billionaires in Silicon Valley are thinking about our collective future.