Treat AI Like a Public Utility

It seems increasingly likely that artificial intelligence will mean major changes to the economy and daily life. We need a public jobs program for displaced workers, and we should regulate AI as a public utility.

AI foundation models share characteristics with electric infrastructure that support the logic of regulating them like public utilities. (Sameer al-Doumy / AFP via Getty Images)

AI companies are reporting disturbing increases in the capabilities of their models. OpenAI’s April 2025 o3 and o4-mini System Card reports that “several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats.” Claude 4 Opus demonstrated the ability to help users source nuclear-grade uranium. Recent models have demonstrated “increased evidence of alignment scheming,” according to a June report commissioned by the state of California: the models can engage in strategic deception, such as being willing to blackmail engineers, and new models can often detect when they are being evaluated.

This isn’t just a hypothetical scenario; it’s been unfolding over the past few months.

As we argued in our first essay on this topic, the Left needs to take both the safety and livelihood risks of AI very seriously. Just as it was a mistake to let climate change be put in an “environmental” box and treated as a special interest or narrowly scientific issue when it will in fact affect everyone’s life, we can’t compartmentalize the changes AI will wreak into a single-issue “technology” or innovation-policy box. This is a topic for everyone to engage with.

In this essay, we address the age-old question “What is to be done?” Our answer: we need a predistributive approach to AI, one that regulates it like a public utility, and we also need to create a public jobs program for displaced workers in the knowledge economy.

Redistribution: From the Welfare State to UBI

The Left has a general answer for job displacement: a robust welfare state with adequate unemployment insurance and public job-training and placement programs. In the United States, these systems have long been inadequate and are now under even more strain. On the educational front, universities are under attack, and their leadership has not been proactive about designing programs for retraining and reskilling.

On the welfare state front, the limited social protections that exist are being eroded (even more so after the passage of Trump’s Big Beautiful Bill, which will slash Medicaid, food stamps, and other vital safety nets of our already threadbare welfare system), though the COVID-19 pandemic showed they can be rapidly expanded in emergencies. Unions, too, have a track record of trying to translate productivity gains from technological innovation into shorter work hours rather than job loss. Strengthening and expanding the welfare state, investing heavily in job training and placement programs, and empowering organized labor will be go-to policy proposals for dealing with AI-driven job loss.

Some more utopian proposals on the Left embrace a “universal basic income” (UBI) not only as an answer to large-scale unemployment but also as a source of leverage for workers. A UBI, some argue, would give people more free time and weaken the fundamental link between wage labor and survival under capitalism.

Other socialists, however, are skeptical about UBI. One source of skepticism is that UBI provides a kind of “welfare for markets,” ensuring public spending flows back into the hands of private capitalists like Amazon, Walmart, and, indeed, now OpenAI, Google, and other purveyors of AI technology. (It is no accident that tech capitalists themselves have become huge proponents of UBI.)

Another concern is that UBI ignores the fundamental dignity that attaches to work in all societies, capitalist ones included. We don’t think most people would be happy with prolonged involuntary un- or underemployment, even if the state gave them enough to live on. The question is how such populations could be shifted into socially useful work via public channels.

Solutions like expanding the welfare state and UBI are rooted in the redistribution of wealth from capitalists (AI capitalists included) to ameliorate the consequences of rapid technological change for labor markets. But necessary as they may be, they don’t fundamentally challenge the power structures shaping AI technology at its core.

“Predistribution,” or Socializing the Gains of AI

More radical approaches would not accept that AI must be controlled by private capitalists who are then entitled to monopolize the surplus produced by its use throughout society. Policies aimed at “predistribution,” as discussed by Saffron Huang and Sam Manning, would seek to generalize the benefits of world-changing technology before it is hoarded by profit-seeking capitalists.

There is something fundamentally collective about AI. Karl Marx argued that capital treats scientific knowledge — what he provocatively called “the general intellect” — as a “free gift” it can appropriate in its quest for profit. Insofar as AI represents a giant form of automated machine learning trained on society’s entire textual knowledge base, it is intellectual appropriation on a scale even Marx could not have foreseen.

It is telling that OpenAI started as a nonprofit before becoming a capitalist enterprise: even AI innovators recognized the danger of attaching the profit motive to a technology capable of producing profound and even existential costs. Scientists and others have already realized its utility as a kind of “research assistant” for coding, answering general questions, and even producing coherent research papers on a given topic.

It is not hard to see how this kind of tool could become an essential service underlying all forms of labor, in workplaces and households alike. When Wired founding editor Kevin Kelly foresaw AI becoming as fundamental as electricity — a general-purpose technology that is “in everything” as phones, devices, cars, and buildings become “cognified” — it probably sounded to most like the usual techno-enthusiasm. But now it is possible to see what it might actually look like for AI to be integrated into people’s everyday lives.

Abstractly, AI gets incorporated into daily life for summarizing, translating, researching, and generating new ideas. It’s not just about AI doing office-job tasks like scheduling meetings, creating graphics and putting together slide decks, helping find the right words for a message, posting on social media, and all the rest. It will enter domestic labor and hobbies — finding repairpeople, identifying mystery plants in the garden, recommending recipes and composing grocery lists, optimizing fitness. None of that seems essential now, just as Google Maps did not seem essential twenty years ago. But people will gradually get used to the capabilities, just as many people today find it very difficult to navigate to a new place without consulting their phones.

AI as a Public Utility

Luckily, there is already a body of policy and legal thought for how to treat such “essential services”: public utility law and regulation. Progressive legal thinkers in the early twentieth century recognized that certain networked infrastructures like gas, water, and electricity should be run as “common, collective enterprise[s] . . . too important to be left exclusively to market forces,” as the legal scholar William Boyd put it. Public utilities were forged via legal charters that mandated they be governed in the public interest and not simply for private profit (although, especially for gas and electricity, private ownership and profit were allowed as part of this legal arrangement).

Since electricity is the prime example of this domain, there are two points to distinguish here. First, AI foundation models share characteristics with electric infrastructure that support the logic of regulating them like a public utility. (While “AI” can refer to many things, here we focus the discussion on foundation models, and especially advanced “frontier” models such as Anthropic’s Claude 3.7 Sonnet, OpenAI’s o3, and DeepSeek’s R1, which require large amounts of data and computing power and which underpin downstream applications.)

AI computer scientist Andrej Karpathy observes that large language models (LLMs) resemble electric utilities in several respects: they require huge fixed capital expenditures to build the network of computing infrastructure that trains the models; customers pay for “metered access” (priced in tokens, based on the amount of text processed); and users expect a consistent flow of reliable service akin to electric voltage.
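To make the metering analogy concrete, here is a minimal sketch in Python of how token-metered billing parallels a kilowatt-hour electric bill. Everything in it is illustrative: the MeteredPlan class is our own invention, and the rates and usage figures are made up, not any utility’s or AI provider’s actual pricing.

```python
# Illustrative sketch only: token metering priced like an electric bill.
# All rates and usage numbers are hypothetical, chosen to show the
# structural parallel between the two billing models.

from dataclasses import dataclass

@dataclass
class MeteredPlan:
    unit_name: str         # what the meter counts
    price_per_unit: float  # dollars per metered unit

    def bill(self, units_used: float) -> float:
        """Charge in proportion to metered consumption."""
        return units_used * self.price_per_unit

# Electricity: metered in kilowatt-hours.
electric = MeteredPlan(unit_name="kWh", price_per_unit=0.15)

# An LLM API: metered in tokens (roughly, fragments of words),
# priced here per million tokens, a common convention.
llm = MeteredPlan(unit_name="1M tokens", price_per_unit=8.00)

# A month of hypothetical household usage under each meter.
print(f"Electric bill: ${electric.bill(600):.2f}")  # 600 kWh
print(f"LLM bill:      ${llm.bill(2.5):.2f}")       # 2.5M tokens
```

The structural point is that both services charge in proportion to a metered flow of consumption, which is precisely the billing pattern that public utility regulation, with its “fair and reasonable” per-unit rates, was designed to oversee.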

These systems could become services integrated into people’s daily lives and work. At a certain scale, large AI systems could be regulated like utilities: required to provide reasonable rates and access, submit to public oversight, and operate according to standards that could include transparency and reliability.

The prospect of public regulation isn’t merely hypothetical. California governor Gavin Newsom vetoed a controversial AI safety bill in fall 2024, but a just-released report commissioned by the state suggests that AI model capabilities have skyrocketed in the eight months since the veto, raising significant new regulatory concerns.

The stakes of leaving such services unregulated are starting to become apparent. A few months ago, for example, OpenAI released an update to ChatGPT that gave it a sycophantic personality: the model became so agreeable that it triggered and reinforced paranoid delusions and effusively praised “genius” business ideas such as selling “shit on a stick.” The company quickly rolled back the update after viral criticism. But the episode shows the danger of leaving a product that 500 million weekly users are coming to depend on unregulated in this fashion.

Another key concern in public utility law is the “obligation to serve” the whole population in a service territory and to avoid inequalities in access. If premium users get AI that works and the rest of us get a slippery, halfway-functioning version, society becomes even more unequal. People would not tolerate always-on electricity and clean water for part of the population and spotty service and occasional contamination for the rest — or at least, they shouldn’t.

The point is that the state can address these issues. Without regulation, the gains from AI will be neither equally distributed nor accessible throughout the population.

Second, AI is not only akin to electricity; it also requires enormous amounts of electricity to power the computing behind its basic functions. In other words, as many have warned, AI and data center growth more generally will create levels of electricity demand, or “load” growth, that we have not seen for several decades — straining our existing models of public utility governance and regulation. The International Energy Agency predicts that electricity demand from “AI-optimised data centers” will quadruple by 2030 and that, in the United States, data centers will account for half of all electricity demand growth (although AI is currently only about 15 percent of data center electricity demand, that figure is predicted to rise rapidly).

This represents a challenge because we’ve only just emerged from half a century of electricity restructuring (or deregulation) premised on the idea that public utilities were behemoth monopolies, slow to change and innovate and harmful to consumers. This process systematically “unbundled” electric utilities into more fragmented markets on the neoliberal promise that more competition always produces optimal results.

Yet the old utility model, based on long-term central planning, socialized investment, and “fair and reasonable” rates for consumers, appears well suited to the challenges we face. Beyond AI, there is also the fact that decarbonization will require a massive expansion of both electricity generation and transmission infrastructure.

In sum, with AI, skyrocketing electricity demand, and climate change, we are faced with inescapable public questions, and the public utility model at least provides a historical example of an institutional form capable of tackling them. We think it is an open question whether these different aspects of AI regulation — regulating AI models and algorithms on the one hand, and regulating their infrastructure and energy use on the other — should be treated within a unified framework. You could imagine joint governance of both dimensions, but you could also imagine fundamental questions of public transparency into how the models work getting lost in conversations about data center power needs. The crucial point is that the general model of public utility regulation holds for both the physical and virtual aspects of these systems.

The utility model is not perfect, as some have pointed out. It can be slow to change and is prone to corruption. But as explained by Pier LaFarge, it also represents “the most successful balance of private capital and public purpose in history . . . [and the] only operating example of socialized infrastructure in the heart of the largest economy in the world.” If the twentieth century was shaped fundamentally by the electricity grid, the twenty-first might rely on public provision of AI infrastructure.

Of course, such a project would mean clawing back control over AI from its private overlords. People from diverse fields have been talking for many years about regulating tech in the public interest, building digital public infrastructure, and turning tech companies into private utilities. Similar heady ideas for a “public internet” have not exactly diminished the power of Google or the social media companies over digital technologies.

But the historical example of electricity gives us a bit more hope — especially as public backlash builds against both the incursions of capitalist AI and the energy and water stresses created by its infrastructure buildout. The late nineteenth-century electricity industry was entirely private — Thomas Edison, seeking Wall Street capital, located the first power station on Pearl Street in New York City. But as more and more progressive reformers recognized electricity’s vital role in urban infrastructure, they threatened full-scale municipal takeovers of electricity systems. That real threat of public ownership led electricity capitalists to accept a compromise: turning electricity into a regulated public utility. We will need similarly powerful movements capable of disciplining private AI today.

A Public Jobs Program for the Knowledge Economy

Treating AI as a public utility does not solve the job-displacement problem, but it could provide a broader planning framework to deal with job losses in a coordinated and public way. For this problem, we also have a rich historical precedent: the New Deal’s public jobs programs.

Designing a public jobs program for the AI era would mean thinking creatively about how to put to work some of the technology’s most visible victims: professional-managerial-class knowledge workers. It’s worth remembering that the New Deal was not only about blue-collar labor building schools, hospitals, and electricity systems but also about harnessing creative labor in the arts toward socially useful ends and making culture accessible to the masses — think of Diego Rivera’s murals evoking labor struggle or Woody Guthrie singing about public hydroelectricity. The New Deal also hired countless engineers, planners, and other technical knowledge workers whose skills were devoted to effective public planning and governance.

Today such knowledge workers have generally sought jobs in the public interest via the “third sector” of nonprofits, chiefly universities and advocacy NGOs (entities, as we’ve seen, that are fundamentally vulnerable to political attack as well as to the whims of philanthropists). Harnessing their skills directly for the public good could provide a much more stable and democratically accountable outlet for such workers. Perhaps software engineers seeking work could find jobs helping to create public AI and public knowledge platforms.

Getting There From Here

These ideas seem distant from the standpoint of political feasibility. Yet the pace of LLM advancement suggests that we might need them to be fully thought through, and to command broad political support, within five years. Right now, we have only proposals for extremely modest state-level legislation. For example, the Workforce Stabilization Act, reintroduced in the New York State Assembly this session, would require companies to conduct assessments of AI’s impact and would impose a surcharge on corporations that displace workers with AI, with waivers for small businesses that need the technology to remain economically viable. The funds raised by the surcharge would go toward worker retraining, workforce development, and unemployment insurance.

This is moving in the right direction, and it is good that people bothered to draft it. Yet it is obviously limited by what seems politically imaginable right now. It also illustrates why a state-by-state approach will be inadequate to the challenge: if companies face surcharges in New York alone, they will simply be more likely to move to states with fewer worker protections, exacerbating existing trends. We need to harness the growing sense of alarm in ways that open up the space of political possibility. To do that, we need people who might think AI is “not their issue” to join the fight.

Whether one thinks AI calls for the expansion of the welfare state, UBI, public utility regulation, a public job guarantee, or some combination thereof, none of these solutions will be easy to win from AI capital or from a larger capitalist class resistant to the taxes and redistribution needed to carry much of this out. So, as we said in our earlier essay, it is important not to treat AI as its own single-policy domain separate from climate, health care, and economic governance. All of these challenges require a broader working-class movement against austerity and the power of capital in general, one that would reassert the central importance of public goods.