The Hollow Crown of ChatGPT’s Head Honcho
Sam Altman may be the reigning king of the AI boom, but the story that matters isn’t his rise or fall. The sector will still demand scale, speed, and the right to run roughshod over the pesky public interest, no matter who wears the industry crown.

There is some debate as to what extent Sam Altman and OpenAI were ever truly devoted to the vision of a democratized AI utopia. (Anna Moneymaker / Getty Images)
Last week in the New Yorker, Ronan Farrow and Andrew Marantz profiled OpenAI chief Sam Altman. The piece opens with the company’s chief scientist, Ilya Sutskever, doubting that Altman is the man to have his “finger on the button” of an artificial intelligence more intelligent than human beings.
What follows is the story of Altman’s fall, return, and future, including the key players involved and the capital at stake. The profile offers a comprehensive account of the moment, including the anxieties that attend the rise of OpenAI and what that means for us as we sort out what to do about — and with — artificial intelligence.
With AI, the stakes are high for everyone, and the story at hand is both new and familiar. As Farrow and Marantz write:
OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of AI infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how AI is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.
AI: A One-Stop Shop, but for What?
Altman and others have sold AI as the solution. To what? To whatever. To everything. If you’ve got a problem, AI will solve it. Farrow and Marantz quote Altman’s own writing, offering AI as a wonder capable of “astounding triumphs” that include “fixing the climate, establishing a space colony, and the discovery of all of physics.” It’s a tall order. Still, it serves as a reminder that artificial intelligence technology — however overblown — holds a great deal of promise.
It also presents peril that has nothing to do with the threat of the rise of Skynet. Workers risk losing power, both political and economic. It’s not obvious that any state has a plan for what comes after a double-digit percentage of the workforce is turfed by AI.
Farrow and Marantz’s profile of Altman is remarkable for its depth and humanity. It does what a good profile should: it offers details and a narrative, assessing its subject without either drinking the Kool-Aid or setting out to do a hatchet job for its own sake. The top-line takeaway is that Altman’s tenure at the company is controversial, to say the least. This controversy reflects not only battles over his character as a human and a leader but also competing visions of what AI is for. It raises the question of how far we should allow a company to take its development without sufficient guardrails — in other words, regulations.
Bristling at regulations is a classic tech industry tale — think, for instance, of Uber. Businesses in general tend to resent regulation, which is to say constraint, except in the limited circumstances in which it serves as an advantage for established firms that are looking to set up barriers to entry for would-be competitors. Even in cases where a company begins with ostensibly altruistic aims, working for the “good” of humanity, as OpenAI did in its initial incarnation as a nonprofit, market logic tends to assert itself. The profile notes that there is some debate as to whether or to what extent Altman and OpenAI were ever truly devoted to the rosier vision of a democratized AI utopia. In the end, that concern is beside the point.
Enter: The Pickle Barrel
In moral philosophy, there’s an analogy that helps to explain how good people go bad: the process is the same by which a cucumber becomes a pickle. The cucumber goes into the barrel of brine and, over time, it gets pickled. Once in the barrel, there’s not much the cucumber can do about it. The trick is to stay out of the pickle barrel in the first place.
Humans, unlike cucumbers, have agency. They can choose to go into the barrel or not and, at least in theory, to leave it. But that’s easier said than done — especially if you become, to mix metaphors a bit, a true believer in the process. And what happens if everyone keeps jumping into the barrel?
The development of AI is heavily capitalized and privatized, with investments pushing well over a trillion dollars and counting. That’s a big, expensive pickle barrel. Those who invest in AI aren’t doing it for fun or sport or charity but to generate returns and transform the economy through tools and processes that will, you guessed it, produce or enhance profits. The AI endeavor may have its star players, but as an undertaking, it is a team effort driven by an established logic of profit maximization and economic transformation — which means displacing workers with machines.
To speak of AI in this context as anything else — as a democratizing tool, an assistant, or a force multiplier for research and exploration — is to miss the point. At best, these outcomes will be side effects of the process. The scale of financial backing behind AI, and the concentration of development in a handful of extremely well-capitalized companies, ensures that much.
Altman’s story is interesting in and of itself insofar as it offers a dramatic look inside a high-stakes, high-profile world. It reads a bit like Succession and a bit like King Lear — or maybe Hamlet or Macbeth. But readers shouldn’t mistake the struggle surrounding Altman’s place, tenure, and approach for the definitive battle over the direction of AI. Swap Altman for almost anyone else and the fine details of the AI story might change, but it’s unlikely that the narrative arc would. Everyone is in this pickle barrel together.
Absent a structural change driven by the state, or rather, a multilateral effort by several leading states around the world, the development of AI will be reckless and bad for workers and consumers alike. Rather than democratizing economic life and political power, the path of financialized AI will drive further class inequality. Count on it.
The Future of AI Isn’t Written Yet
However deterministic the development of AI may be under the current paradigm, we ought not to take this state of affairs as inevitable. To say that AI will be predominantly used as a tool for economic dominance so long as it’s developed by an unaccountable cabal isn’t to say it must be so. It’s not as if all this was preordained at or around the time of the Big Bang.
Rather, all things being equal, a specific kind of paradigm will tend to yield specific kinds of results. If we want different results, we must insist on a different paradigm. And if we want a different paradigm, we’re going to have to build it ourselves. We can’t leave that work to Silicon Valley.
In the case of AI, an alternative model entails not just state regulation but democratized decision-making around the development and use of the technologies at scale. Their consequences will be structural and long-term, shaping our employment and capacity to make ends meet. They also contain the germs of possibility — the long-shot prospect of facilitating a productive and inclusive economy and democratic political sphere based on moral equality and some measure of material justice.
As AI’s effects pervade more and more of our lives and workplaces, anger is bound to come to the fore. Over the weekend, Altman’s home in San Francisco was attacked with an incendiary device. Channeling popular rage into what anarchists once called “propaganda of the deed,” his would-be attacker becomes a dark mirror of the tech titan he abhors. But neither high-handed executives nor Luddite avengers will fix this. What’s needed is mass politics and public decision-making.
AI concerns all of us. Its future must be determined by us, under the aegis of the state, not by a handful of tech executives in California.