The Problem With AI Is About Power, Not Technology

Artificial intelligence has the potential to seriously harm workers — not because of something inherent to the technology, but because bosses are in control of it.

While technologies like ChatGPT might seem poised to replace white-collar workers, employers are more likely to use machine learning to break up and deskill jobs. (Olivier Morin / AFP via Getty Images)

The material changes ushered in under the aegis of artificial intelligence (AI) are not leading to the abolition of human labor but rather its degradation. This is typical of the history of mechanization since the dawn of the industrial revolution. Instead of relieving people of work, employers have deployed technology — even the mere idea of technology — to turn relatively good jobs into bad jobs by breaking up craft work into semiskilled labor and by obscuring the labor of human beings behind a technological apparatus so that it can be had more cheaply.

Employers invoke the term AI to tell a story in which technological progress, union busting, and labor degradation are synonymous. However, this degradation is not a quality of the technology itself but rather of the relationship between capital and labor. The current discussion around AI and the future of work is the latest development in a longer history of employers seeking to undermine worker power by claiming that human labor is losing its value and that technological progress, rather than human agents, is responsible.

AI Is Not a Specific Technology

When tech entrepreneurs speak of AI doing this or AI doing that — like when Elon Musk promised former British prime minister Rishi Sunak a coming age of abundance in which no one will need to work because “AI will be able to do it all” — they are using the term AI in a way that occludes more than it clarifies. Academic researchers in the field of AI, for example, do not generally use the term AI to describe a specific technology. It is, quite simply, the practice of making “computers do the sorts of things that minds do,” as defined by Margaret A. Boden, an authority in the field. In other words, AI is less a technology and more a desire to build a machine that acts as though it is intelligent. There is no single technology that makes AI distinctive from computer science.

Much of the current discussion around AI centers on the application of what are known as artificial neural networks to machine learning. Machine learning refers to the use of algorithms to find patterns in large datasets in order to make statistical predictions. Chatbots like ChatGPT are a good example. (A chatbot is a computer program that mimics human conversation so that people can interact with a digital device as if they were communicating with a human being.) Chatbots work by using immense computational power and very large amounts of data to weigh the statistical likelihood that one word will appear next to another.
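To make that statistical principle concrete, here is a minimal sketch in Python of next-word prediction from simple word-pair counts. It is a toy illustration of the underlying idea, not how systems like ChatGPT are actually built; the tiny corpus and the function name are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real chatbot is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    candidates = following[word]
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # prints 'cat', the most frequent follower of 'the'
```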

Machine learning generally relies on designers to help the system interpret data. This is where artificial neural networks come into play. (Machine learning and artificial neural networks are only two tools under the general umbrella of AI.) Artificial neural networks are linked software programs (each individual program is called a node), each of which computes one thing. In the case of something like ChatGPT (which belongs to the category of large language models), each node is a program running a mathematical model (called a linear regression model) that is fed data, predicts a statistical likelihood, and then issues an output. These nodes are linked together, and each link has a varying weight, that is, a numerical rating indicating how important it is, so that each node influences the final output to a different degree. Basically, neural networks are a complex way of weighing many factors simultaneously to produce an output, such as a string of words in response to a question entered into a chatbot.
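Here is a hedged sketch in Python of the arithmetic a single node performs and of how weighted links chain nodes together. The input values, weights, and layer sizes are made-up numbers chosen for illustration; real networks have millions or billions of weights, set during training rather than by hand.

```python
import math

def node(inputs, weights, bias):
    """One node: weigh each input, sum them, then squash the total
    into an activation between 0 and 1 (a sigmoid function)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two linked layers: the outputs of the first layer become the inputs of
# the next, and every link carries its own weight, so each node influences
# the final output to a different degree.
inputs = [0.5, 0.8]                       # made-up input data
hidden = [node(inputs, [0.9, -0.4], 0.1),
          node(inputs, [0.2, 0.7], -0.3)]
output = node(hidden, [1.5, -1.1], 0.0)   # final prediction between 0 and 1
print(output)
```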

This imitation is a far cry from human consciousness. Researchers do not understand the mind well enough to actually encode the rules of language into a machine; instead, they have chosen what Kate Crawford, a researcher at Microsoft Research, calls “probabilistic or brute force approaches.” No human being thinks this way. Children, for example, do not learn language by reading all of Wikipedia and tallying up how many times one word or phrase appears next to another. In addition, these systems are particularly energy intensive and expensive. The cost of training GPT-4 came in at around $78 million; for Gemini Ultra, Google’s answer to ChatGPT, the price tag was $191 million. Human beings, it should be noted, acquire and use language much more cheaply.

In standard machine learning, human beings label different inputs to teach the machine how to organize data and weigh its importance in determining the final output. For example, many people (paid very poorly) “pre-train” or teach computer programs what things look like, labeling pictures so that a program can differentiate between, say, a vase and a mug. (In a system doing “deep learning,” human beings play a much smaller programming role. With deep learning, the artificial neural networks in use have more layers than in classical machine learning, and human beings do much less labeling of the elements in a dataset. In other words, it can be fed much rawer, unprocessed data and still organize it.)
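A minimal sketch of that labeling step, in Python. The “features” here are invented stand-in measurements (height, handle size) rather than real image data; the point is that the labels attached to the training examples come from human beings, and they are what allow the program to sort a new, unlabeled item.

```python
# Human-labeled training data: each entry is (features, label).
labeled_examples = [
    ((30.0, 0.0), "vase"),
    ((28.0, 0.1), "vase"),
    ((9.0, 4.0), "mug"),
    ((10.0, 3.5), "mug"),
]

def classify(features):
    """Label a new item by finding its nearest human-labeled neighbor."""
    def distance(example):
        example_features, _label = example
        return sum((a - b) ** 2 for a, b in zip(example_features, features))
    return min(labeled_examples, key=distance)[1]

print(classify((29.0, 0.2)))  # prints 'vase': closest to the labeled vases
```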

The GPT in ChatGPT, it is important to note, stands for generative pre-trained transformer, a transformer being a kind of neural network. In the case of ChatGPT, human beings taught and corrected the program as it was fed astronomical amounts of data, mostly written text. In fact, according to the Guardian, contract workers in Kenya employed by OpenAI to train ChatGPT earned between $1.46 and $3.74 an hour to label text and images featuring “violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.” Several workers claimed that these working conditions were exploitative and requested that the Kenyan government launch an investigation into OpenAI.

Thus, AI, as Boden elaborates, “offers a profusion of virtual machines, doing many different kinds of information processing. There’s no key secret here, no core technique unifying the field: AI practitioners work in highly diverse areas, sharing little in terms of goals and methods.” Contemporary use of the term AI, however, tends toward black-box discussions of material changes, mystifying the technology in question while also homogenizing many distinct technologies into a single revolutionary mechanism — a deus ex machina that is monolithic and obscure. This effect is not accidental. It serves the interests of capital, and it has a history.

AI and Labor Degradation

AI, in other words, is not a revolutionary technology, but rather a story about technology. Over the course of the past century, unions have struggled to counter employers’ use of the ideological power of technological utopianism, or the idea that technology itself will produce an ideal, frictionless society. (Just one telling example of this is the name General Motors gave its pavilion at the 1939 World’s Fair: Futurama.) AI is yet another chapter in this story, one in which technological utopianism degrades labor by rhetorically obscuring it. If labor unions can understand changes to the means of production outside the terms of technological progress, it will become easier for them to negotiate over those changes here and now, rather than debate what effects technology might have in a vague, all too speculative future.

The uses that employers have made of machine learning and artificial neural networks conform with the long history of the mechanization of work. The Marxist political economist Harry Braverman’s labor degradation thesis, in which industrial capitalist development tends toward the breakup of craft work, the broader diffusion of the detailed division of labor, and the application of factory regimes to ever more kinds of work, still holds. If anything, managerial use of digital technologies has only accelerated this tendency. Moritz Altenried, a scholar of political economy, recently referred to this as the rise of the “digital factory,” combining the most overdetermined, even carceral, elements of traditional factory work with flexible labor contracts and worker precarity.

Employers have deployed algorithms to exert immense control over the labor process, using digital platforms to break jobs into discrete tasks and to surveil how quickly workers complete them, as with Amazon’s use of algorithms to push warehouse workers, or ride-hailing apps speeding up drivers. Digital platforms have allowed employers to extend factory logic practically anywhere. Here, we can see the most “revolutionary” aspect of the technological changes referred to as AI: the mass diffusion of worker surveillance. While digital platforms are not particularly good workers, they are very effective bosses, tracking, quantifying, and compelling workers to labor according to the designs of their employers.

Arguing that machine learning is not categorically different from earlier forms of mechanization is not to say that everything will be fine for workers. Machine learning will continue to aid employers in their project to degrade work. And as with earlier forms of mechanization — including the computer mechanization of white-collar office work since the 1950s — employers have set their sights on turning skilled, white-collar jobs into cheaper, semiskilled jobs. In the second half of the twentieth century, computer manufacturers and employers introduced the electronic digital computer with the aim of reducing clerical payroll costs. They replaced the skilled secretary or clerk with large numbers of poorly paid women operating key-punch machines who produced punch cards to be fed into large, batch-processing computers.

The result was more, not fewer, clerical workers, but the new jobs were worse than what had existed before. The jobs were more monotonous, and the work was sped up. In the last quarter of the twentieth century, employers successfully persuaded middle managers to do clerical labor themselves (what one consultant called the “bourgeoisification” of clerical work) by giving them desktop computers to do their own typing, filing, and correspondence — work that the company once paid clerical workers to do. This style of job degradation remains typical in white-collar work today.

While technologies like ChatGPT might seem poised to replace ostensibly white-collar workers like screenwriters, employers are far more likely to use machine learning to break up and deskill jobs in the same way that they deployed older forms of mechanization. Last year, Google pitched a machine learning chatbot named Genesis to the New York Times, the Washington Post, and NewsCorp. A spokesperson for Google acknowledged that the program could not replace journalists or write articles on its own. It would instead compose headlines and, according to the New York Times, provide “options” for “other writing styles.” This is precisely the kind of tool that, marketed as a convenience, would also be useful for an employer who wished to deskill a job.

Like older forms of mechanization, large language models do increase worker productivity, which is to say that the greater output still depends on human labor, not on the technology alone. Microsoft recently aggregated a selection of studies and found that Microsoft Copilot and GitHub’s Copilot — large language models similar to ChatGPT — increased worker productivity by between 26 and 73 percent. Harvard Business School concluded that “consultants” using GPT-4 increased their productivity by 12.2 percent, while the National Bureau of Economic Research found that call-center workers using “AI” processed 14.2 percent more calls than their colleagues who did not. However, the machines are not simply picking up the work once performed by people. Instead, these systems compel workers to work faster, or they deskill the work so that it can be performed by cheaper workers who fall outside the studies’ frame.

For example, in their recent strike, members of the Writers Guild of America (WGA) demanded that movie and television studios be forbidden from imposing “AI” on writers. Chatbots are not currently capable of replacing writers outright. Rather, it seems more likely that studios would deploy machine learning systems to break the job of “writer” into a series of discrete tasks and, through this division of labor, turn it into smaller, more cheaply paid positions in which writers would be either prompt engineers feeding scenarios into the machine, or finishers polishing machine-made scripts into a final product. The WGA’s recent contractual wins regarding AI are limited to the protection of credits and pay, although the union had initially set out to reject the use of large language models completely. That bargaining position was actually rare; since the middle of the twentieth century, unions have generally been unable — due either to weakness or ideological blinders — to treat technology as something open to negotiation.

Examples also abound of employers deploying “AI” not only to break up jobs but also to obscure the presence of poorly paid human workers, many of them based in the Global South. In the words of sociologist Janet Vertesi, “AI is just today’s buzzword for ‘outsourcing.’” Take, for example, Amazon’s “Just Walk Out” system at its brick-and-mortar stores, where customers shopped and walked out without stopping at a cash register because payment was processed digitally. Amazon has admitted that the “generative AI” it used to tally up customer receipts actually consisted of workers in India watching surveillance footage and manually drafting itemized bills.

In a similar case, several major French supermarket chains boasted that they were using “AI” to spot shoplifters when the surveillance was in fact being conducted by workers in Madagascar watching security footage and earning between ninety and one hundred euros a month. The same goes for the so-called Voice in Action technology (whose manufacturer claims it is an “AI-driven” system) that took customers’ drive-through orders at US fast food restaurants; more than 70 percent of the orders were actually processed by workers in the Philippines. The anthropologist Mary Gray and Microsoft senior principal researcher Siddharth Suri have usefully dubbed this practice of hiding human labor behind a digital front “ghost work.”

AI and Ideology — Automation Discourse Redux

But, as mentioned earlier, it would be a mistake to think of AI in primarily technological terms — either as machine learning or even as digital platforms. This brings us to the automation discourse, of which the recent AI hype is the latest iteration. Ideas of technological progress certainly predate the postwar period, but it was only in the years after World War II that those ideas congealed into an ideology that has generally functioned to disempower working people.

The ur-version of this ideology was the automation discourse that arose in the United States in the years following World War II, which held that all technological change bent toward the inevitable abolition of human labor, in particular blue-collar industrial labor. It was the immediate product of two interlocking phenomena: first, the new institutional strength of organized labor coming out of the militant 1930s, which posed a threat to capital; second, the remarkable technological enthusiasm of the postwar era. Since the 1930s, corporate America had sought to portray itself and its products as themselves producing the kind of utopian future that left radicals had long associated with political revolution. (For example, the DuPont corporation promised “revolutionary” changes and “better things for better living . . . through chemistry,” instead of, say, the redistribution of property.)

Victory in World War II, government-funded technological breakthroughs, and the resulting economic boom seemed to ratify this argument. In the words of Business Week in 1955, there was “a sense that something new and revolutionary was being born in the laboratories and the factories.” It therefore seemed reasonable to actors from across the political spectrum — from industry leaders to union officials to members of the student movement, and even some radical feminists — to think that perhaps American technology could overcome those most painful hallmarks of industrial capitalist production: class struggle and workplace alienation.

Playing into this general sense, a vice president of production at the Ford Motor Company coined the word “automation” to depict the company’s policy of fighting unions and degrading working conditions while it retooled as the apolitical and inevitable development of industrial society itself. Ford, and soon practically everyone, depicted “automation” as a revolutionary technology that would fundamentally (and inexorably) change the industrial workplace. The definition of automation was notoriously vague, but many Americans still genuinely believed it would, entirely of its own accord, usher in abundance while doing away with the proletariat, replacing it, in the words of sociologist and celebrated public intellectual Daniel Bell, with a highly skilled white-collar “salariat.”

Across industries, however, what managers and workers referred to as automation resulted just as often in degraded and sped-up work as in the substitution of human labor with machine action. And yet, for the most part, labor found itself both rhetorically, and to a certain extent intellectually, cowed by the automation discourse. At a 1957 meeting of senior officials representing ten of the largest unions in the United States at the time, Sylvia Gottlieb, the director of education and research for the Communications Workers of America (CWA), summed up the problem: they were unsure whether automation was the technological revolution that capital said it was, and they needed to take care against “the labor movement becoming identified as ‘weepers’ on this subject,” that is, as prophets of doom opposed to technological progress or, even worse, Luddites. Gottlieb concluded that it made sense “to point not only to the problems and difficulties of automation but to acknowledge the tremendous benefits it provides.”

Part of the power of the automation discourse was that it spoke to a techno-progressivism that, even to this day, appeals to certain tendencies on the Left, like the so-called Marxist accelerationists who believed that the development of industrialization itself would produce the conditions for a proletarian revolution. At the very least, in the years immediately following World War II, the idea of autonomous technological progress offered Walter Reuther’s administration of the United Auto Workers (UAW) cover for the Treaty of Detroit’s retreat on the question of “production standards,” that is, a say over which machines would exist on the shop floor and how workers would use them. Union officials did not know what “automation” would bring, and they largely failed to disentangle teleological stories of technological progress from management’s attempts to control the labor process.

The International Longshore and Warehouse Union (ILWU) under Harry Bridges was unique among postwar unions in that it managed to operate within the confines of postwar technological optimism and still get something for its members, letting containerizing shippers buy the union out of dockworker jobs in exchange for generous retirement benefits. Yet this buyout came at the price of a generation of dockworkers (the so-called B-men) who were not eligible for those benefits but whose labor remained particularly sweated. Still, the ILWU was the exception.

More typical was the fate of the United Packinghouse Workers of America (UPWA), which at first allowed employers to “automate” (i.e., to bring in power tools) in exchange for somewhat improved retirement benefits and the right to transfer jobs. Workers laid off as a result of labor speedup were advised to take part in job training programs that the UPWA’s president would later condemn. “What you were doing,” he said, “was training people so that they could be unemployed at a higher level of skill, because they couldn’t get jobs.” As the industry restructured in the second half of the twentieth century, the union disintegrated. Today, meatpacking remains a labor-intensive industry, although now much of it is nonunion.

Practically speaking, “AI” has become a synonym for automation, carrying a similar if not identical set of unwarranted claims about technological progress and the future of work. Over the better part of the past century, workers, like most members of the general public, have had a great deal of difficulty talking about changes to the means of production outside the terms of technological progress, and that has played overwhelmingly to the advantage of employers. The notion of technology as inevitable and as, ultimately, a benefit to all, even as civilization itself, has made it difficult to criticize. If history is any guide, workers need to reject the teleological claims that capital makes about technology; they must see technological change not as the organic unfolding of civilization, but as just another aspect of the workplace that should, in principle, be subject to democratic governance.

AI is not a specific technology. Often enough, it is a story about technology, one that serves to disempower working people. Workers have reason to fear AI, but not because it is in and of itself revolutionary. Rather, workers and organizers should worry because the idea of AI allows employers to pursue some of the oldest methods of industrial labor degradation. In the past, unions have suffered when they took the technological claims of their employers as fact. For labor, it might quite literally pay to refuse to be impressed by technological utopianism.

It behooves labor to divorce specific material changes to the labor process from grand narratives of technological progress. Working people should have a say in what kinds of machines they use on the job; they should have some control. The first step in that direction requires that they be able, at the very least, to say “no” to the material changes employers seek to make to their workplaces, and to say it without thinking of themselves as impediments to progress.