Democratic Governance of AI Is the Real Solution

The potential for catastrophic effects from the AI boom demands robust deliberation and real democratic governance. Localized initiatives like data center moratoria won't get us there.


Blocking data center build-outs is a tempting shortcut to slow the unnerving speed and spread of AI. But the real challenge is building democratic oversight of how it develops. (Mario Tama / Getty Images)


Should we just stop building data centers? Or at least pause for a little while? The intuitive answer seems to be yes, let’s take a breather. The Artificial Intelligence Data Center Moratorium Act, introduced by Bernie Sanders and Alexandria Ocasio-Cortez a few weeks ago, calls for a “reasonable pause.” Slowing down has also been the focus of state-level organizing, with at least twelve states introducing proposals for moratoria; Maine’s LD 307 passed both chambers and would have been the first state-level pause had the governor not vetoed it.

These efforts seek to use the power and machinery of familiar NIMBY (“not in my backyard”) politics — local opposition, tying up projects in red tape, and so on — to confront the multiple perceived threats of galloping energy demand, carbon emissions, and job loss. Successful moratoria would curb digital growth by starving it of the energy and physical infrastructure needed to train and operate AI models.

Counterintuitively, a moratorium on AI data centers is a terrible idea — one that raises serious equity concerns. A moratorium springs from the desire to stop the concentration of wealth, but ironically, it is likely to exacerbate it. It’s a massive strategic blunder for the Left, and we should think through the global justice implications and follow-on effects. What would happen if these moratoria moved forward?

Offshoring Burdens and Cementing Digital Divides

To begin, it’s clear that a moratorium would not effectively pause AI development. It would not halt development; it would simply change the geopolitics of that development, the strategies of AI companies, and who is able to access AI services.

We should be wary of proposals that would send burdens elsewhere. Under neoliberal capitalism, industries offshore environmental harms to places with weaker governance, lower labor costs, and fewer environmental safeguards. AI data centers will be harder to offshore than factories because of weaker power grids overseas and restrictions on chip exports. But those constraints are not set in stone — companies may simply influence chip export rules and ink bilateral deals to expand AI infrastructure abroad, as in the US–UAE partnership, or bring their own behind-the-meter power, as they are already doing in the United States.

If a better world is the goal, the answer is not merely shifting the geopolitics of AI development but reshaping it. A moratorium would constrain compute, inducing tech companies to raise prices, and small businesses, academic and nonprofit researchers, and individuals would be the first to lose access. Larger companies would simply buy access to top-tier AI. The result would be a business landscape that favors incumbents — with global implications for students, small business owners, and first-generation professionals in emerging economies.

One could argue that many people already use cheaper open-weight models developed in China, like DeepSeek and Qwen, and that a moratorium would increase their use; these models reflect different approaches to data privacy, human rights, and economics. Expensive US models may also drive countries to develop “frugal” and “sovereign” AI models attuned to cultural nuance, data ownership, and privacy, and in the long run, this is a compelling vision. But that is not what has been happening. In the short run, inaccessible frontier models are set to entrench digital divides.

And these timescales matter. The alarm in tech corners of the internet over Claude Mythos Preview reveals the stakes of developing powerful AI first. Mythos is a model with cybersecurity capabilities so advanced that its developer, Anthropic, declined to release it publicly, instead granting access to a consortium of companies to help them patch vulnerabilities. Though there is some chatter about how a limited release benefits Anthropic, experts seem to deem the capabilities real. Imagine a world in which all of your data is exposed — and every organization has cybersecurity vulnerabilities. The implications go beyond identity theft and scamming to banking, health care systems, business, elections, food and energy systems, and more. AI development is a race in which pausing carries real consequences for all of our infrastructure, digital and physical.

Data Center Blocking as Class Warfare

The irony is that much of the organizing to stop data centers is coming from wealthier communities and groups, who tend to have greater power to organize. True, we lack a rigorous study of this. But given the geography of data centers in the United States — clusters in the affluent suburbs of northern Virginia and Columbus, Ohio, with a broader expansion into surrounding areas where incentives and available land have enabled large build-outs — many appear, thus far, to be sited in non-disadvantaged communities.

Much of the resistance has been organized by the environmental movement, with Food and Water Watch convening a letter signed by 230 groups. There is a narrative of vulnerable communities standing up to big industries, illustrated by cases that do fit this model, like xAI’s exploitation and air pollution at its South Memphis data center. However, the reality on the ground — and in the anti–data center Facebook groups — points to a more complicated picture. AI data centers face a left-right opposition, as likely to boast Gadsden flags as “In This House . . . ” yard signs.

The class particulars matter. What if the picture that emerges of “data center resistance” is one of educated middle-class people — including exurban and rural residents but also professionals who work in knowledge jobs — mobilizing, consciously or not, to protect their class position from the threats AI poses? How many of these people will block data centers but end up paying for a subscription to a frontier model once it is clear how useful it is to navigate daily work and life? It’s not fair for affluent environmentalists and property owners to try to stop development of this infrastructure before most people in the world have even had a chance to work with and learn from these models.

People stuck in an outdated picture of frontier model capabilities might dismiss the impact of data center blocking on AI access. But that reflects a poverty of imagination about what accessible, cheap intelligence can do — for learning, creating software, advice, research, personal projects, and more. For example, last year I took undergraduate courses in calculus and in programming at the state school where I also work as a professor. It’s clear that, for many subjects, the personalized tutoring offered by AI is far better than the outdated lecture-based model still employed by universities. Without radical change to our pedagogy, we risk extracting rent for credentials.

Another example: I was dealing with a complex bureaucratic immigration matter involving documents in foreign languages, and a lawyer quoted me $3,000 to solve it. Claude talked me through the steps of dealing with the various offices, and ChatGPT translated the forms, saving me the expense. You can see how the latest version of the “poverty premium” is shaping up: a society where educated middle-class people like me pay the monthly fees for these services, learning and moving through life with less friction, while people who can’t afford the subscription are stuck in the system and end up paying more. This AI-enhanced poverty premium is not a distant prospect but a few years away — and it is made more likely by a moratorium that limits computation.

Real Concerns

Part of why the moratorium push is such a dead end is that the disparate right-left coalitions that have emerged around stopping data centers have divergent interests on other issues. It doesn’t follow that stopping data centers will lead us toward a clean energy build-out or the social policies needed to address job displacement, such as health care for all.

As policy advocate Nat Purser has already argued in Asterisk, a pause is not a substitute for actual AI governance, and attempting to tackle every issue through a single move makes it less likely that any of them get addressed. Rather than gather progressive momentum for deep, multi-issue social reform, populist anger about data centers is likely to feed a conspiratorial para-environmentalist politics rife with fears about electromagnetic fields and cellular damage. Or it might lead to political violence and subsequent crackdowns on activists, as the recent attacks on Sam Altman’s residence portend.

An epistemic environment shaped by alarmist claims carries real risks for people’s health and well-being. Rather than embrace apocalyptic rhetoric, we need to be clear-eyed about the real problems the data center build-out poses, because they are mounting. Making progress on the myriad issues packaged under “AI” is going to require separate work streams.

What about the climate? This is a decarbonization planning problem. We already knew we needed clean power to decarbonize our cars, buildings, and factories; the emissions and energy draw of data centers are real issues, but they need to be placed within that bigger picture. Of the United States’ roughly six billion tons of greenhouse gas emissions per year, around 67 million tons come from data centers. Data centers for AI specifically are projected to account for roughly 2444 million tons of CO2 by 2030, though this could be more if they end up relying on behind-the-meter gas turbines rather than connecting to the grid. The good news is that AI data centers are much easier to decarbonize than heavy industry. Companies’ desperation for power can potentially be leveraged to get them to finance some of the grid build-out we need for decarbonization. And there is a proliferation of state legislation requiring data centers to use clean energy — Minnesota’s HF 16, for example. This is something states can regulate.

What about water use? This is a water resource management problem. In 2024, for example, Google’s global data center operations consumed 8.1 billion gallons — about as much as it takes to irrigate fifty-four average golf courses in the southwestern United States — though the majority of Google’s data centers are in places where water is abundant. In water-scarce regions, data centers need to be contextualized within broader water management issues, such as trade-offs with irrigated lawns and agriculture, and states can oversee development, create water-permitting requirements, or mandate that companies use the most water-efficient technologies. Again, these are not pie-in-the-sky ideas but concrete measures that legislative proposals are already exploring or enacting.

What about AI crashing the economy, either through a bubble or through labor displacement? This is where donors and organizations should be concentrating resources, at an emergency scale. The data center build-out is propping up the entire US economy right now. Hyperscalers are projected to spend the equivalent of 2.1 percent of US GDP on it this year, making it a larger capital outlay than railroad construction, the highway system, or the space program. At the same time, circular finance structures are raising serious concerns about systemic risk. And if AI companies do not crash out but instead successfully monetize their products, labor displacement issues await.

The fact that OpenAI just released a report on the need for industrial policy to address the social and economic disruptions from AI — framed in terms of “starting a conversation” (about a decade too late) — highlights how open the space for serious policy ideas still is. From public wealth funds to efficiency ideas, there are actually some good concepts in OpenAI’s report and in the wider policy debate, though OpenAI has a history of blocking many of these proposals.

Many of these challenges already have proposed legislation drafted. But all of them need more specification, public deliberation, and progressive leadership. The people, not companies like OpenAI, should be driving this discussion. The funders and organizers in environmental groups leading data center blocking efforts should turn their attention toward this broader set of solutions — including public engagement and education on the technology, the stakes, and the policy options — and not be seduced by the simple lure of dead-end, inequitable data center moratoria.