The Robots Are Coming for Our Souls

The problem with artificial intelligence isn’t that robots will replace us at work but that they will make us less human.

A prosthetic expert tinkers with a lifelike robot on May 9, 2018, in Cornwall, England. (Matt Cardy / Getty Images)

I have been thinking a lot about Ted Chiang’s piece on artificial intelligence (AI) in the New Yorker, “Why A.I. Isn’t Going to Make Art.” His basic argument is that AI, exemplified by large language models (LLMs) like ChatGPT that produce text and text-to-image models like DALL·E, can’t make art because it is not capable of making the specific and contextual choices that artists use their years of knowledge and experience to make. There’s a good critique to be made of the choices framework, but I more or less take it for granted that AI can’t make art (though it can make things that look like art), and that people who care about craft and prose and the quality of words going together one after the other are not likely to put prompts into an LLM to make their essays and poems and books.

What’s more interesting and more worthwhile to consider is why someone might want to use an LLM to write something, and in what context. Someone in an office job of any kind, someone who has to produce lists and reports and form letters, might find in an LLM a way to work less and enjoy their time more. The logic of labor exploitation under capitalism means that all workers are treated as interchangeable widgets; it follows that someone who has to survive in that system would use a tool that helps them become a better, more efficient widget with minimal effort and time, so they have more time to do the things they actually want to do.

This is rather easy to point out and hard to do anything about. The people who stand to make money, money, money from the output of LLMs do not care that the end-of-quarter summary report was written by “artificial” and not human intelligence. And maybe the people plugging words into the system and waiting for it to churn out that report don’t either. They’d rather be online shopping or reading a book or looking at social media or talking to their coworker or whatever. That’s fine.

The problem is that robots are going to put us out of jobs we don’t want to be doing in the first place, but we are not going to reap the benefits of that displacement. We are not going to get more time to ourselves because we can do our dumb email job fifty times faster with AI, and we are not going to get paid more because we are more efficient. (David Graeber made this argument more than a decade ago, in the essay that became Bullshit Jobs.)

What is more likely to happen — perhaps already happening — and what I consider tragic is that the robots will replace us not just in a practical sense, i.e., at work, but in the sense that they will make us less human. We’ll depend on them at work, and then to write a letter to our landlord, and then eventually we will be so out of the practice of thinking that we’ll need an LLM to compose a text to our girlfriend (or, like in that pulled Google ad, to help our daughter write a letter to her favorite athlete). They will help us bypass a process that doesn’t just make us better artists or writers but, crucially, better people.

Thinking well, and the necessary self-reflection that accompanies and facilitates it, increases our capacity for things like empathy, generosity, and solidarity. It deepens our ability to hold two or more contradictory ideas at once. It connects us with ourselves so that we can connect with others. It makes us more thoroughly part of humanity.

Of course, make-work and toil are hardly the pinnacle of deep-thinking opportunities. But the proliferation of LLMs is making an already dire situation — one in which people get almost no time to think for themselves, to do things that fulfill them, that deepen their capacity for love and empathy and everything else I named above — much worse.

The employment of these models is deeply antisocial not only because they cut off communication between two humans but also, and more importantly, because they cut off communication with and within the self. Their use fosters an instrumentalist view of writing and even speaking, of language itself, foreclosing the possibility that we might use language as a means of discovery. When an LLM spits out a text based on a prompt, it makes something that looks like an essay or a poem but cannot be either of those things, because it communicates not the true consciousness of its creator but merely her original intent. Any writer knows that you rarely make the thing you intended to.

Writing, ideally, is a means through which to make expressible the otherwise inexpressible, a means through which to find what is otherwise hidden. I cannot talk the way I write; my ideas are for the most part inexpressible and therefore unknowable to me in speech. I think the same is true for people who wouldn’t call themselves writers.

There is something valuable in that little mystery, something that links us to each other, thereby making life worth living and the struggle for a better world worth waging. Without exposure to that mystery, we stop being part of humanity and instead become mere cogs in the completely arbitrary, exploitative system that happens to organize our working lives.