Humanity Needs Democratic Control of AI
The danger from artificial intelligence isn’t a Terminator-style robot uprising but tech capitalists using the technology to push their own interests. Seizing control from them is the best way to ensure algorithmic technology serves the social good.

The objectives encoded in AI systems ultimately mirror the priorities of those who control the “means of prediction.” (Kyle Grillot / Bloomberg via Getty Images)
When a predictive algorithm denied thousands of black applicants fair mortgage approvals in 2019, it wasn’t a glitch but a design choice — reflecting the priorities of profit-driven tech giants. In The Means of Prediction: How AI Really Works (and Who Benefits), Oxford economist Maximilian Kasy argues that such outcomes are not accidents of technology but the predictable results of who controls it.
Just as Karl Marx identified control over the means of production as the basis of class power, Kasy identifies the “means of prediction” (data, computational infrastructure, technical expertise, and energy) as the foundation of power in the AI age. AI thus becomes a battleground where algorithms shape the future to serve tech owners rather than the working class. Kasy’s provocative thesis exposes AI’s objectives as deliberate choices, encoded by those who control its resources to favor profit over social good. Only by seizing democratic control of the means of prediction, he argues, can we ensure that AI serves society at large rather than the profits of tech giants.
Kasy begins by demystifying AI, grounding it in the mechanics of machine learning, where algorithms predict future outcomes based on past data. But which future outcomes are algorithms programmed to predict? Social media platforms, for instance, collect vast amounts of user data to predict which ads maximize clicks, and with them expected profits. In pursuing engagement, algorithms have learned that outrage, insecurity, and envy keep users scrolling. The result is a surge in anxiety, sleep deprivation, and body-image distress, especially among teenagers, driven by algorithmic comparison and targeted advertising.
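The choice Kasy highlights can be made concrete with a minimal sketch (my illustration, not code from the book). The data is synthetic and the feature and outcome names are hypothetical; the point is that the same learning machinery will optimize whatever outcome its owner selects as the prediction target.

```python
# A minimal sketch, not Kasy's code: the same classifier optimizes whichever
# outcome its owner chooses as the label. All data is synthetic; the feature
# and outcome names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical user features (e.g., time on platform, posts seen, age).
X = rng.normal(size=(n, 3))

# Two candidate outcomes the platform could predict for the same users:
# ad clicks (a proxy for revenue) and a measure of user well-being.
clicks = (X @ np.array([1.5, 0.8, -0.2]) + rng.normal(size=n) > 0).astype(int)
well_being = (X @ np.array([-1.0, 0.3, 0.5]) + rng.normal(size=n) > 0).astype(int)

# The learning machinery is identical; only the target label differs.
profit_model = LogisticRegression(max_iter=1000).fit(X, clicks)
welfare_model = LogisticRegression(max_iter=1000).fit(X, well_being)

# The two models weight the same features very differently, because each
# optimizes the outcome its owner selected, not a neutral notion of "good."
print("profit-objective weights: ", profit_model.coef_.round(2))
print("welfare-objective weights:", welfare_model.coef_.round(2))
```

Nothing in the algorithm distinguishes the two objectives; the difference lies entirely in which label the platform’s owners choose to predict.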
Predictive tools used in welfare or hiring contexts produce similar effects. Systems designed to flag “high-risk” applicants rely on biased historical data, effectively automating discrimination by denying benefits or job interviews to already marginalized groups. Even when AI appears to promote diversity, it usually does so because inclusion enhances profitability — for example, by improving team performance or brand reputation. In such cases, there exists an “optimal” level of diversity: the one that maximizes expected profits. Kasy also explores AI’s growing role in labor and automation. In workplaces, AI can either augment human abilities or replace them entirely, creating unemployment for some while concentrating wealth for others.
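The claim that such systems automate discrimination can likewise be sketched in a few lines (again my own illustration, with entirely synthetic data and an assumed bias mechanism). A “high-risk” classifier trained on historical decisions that over-flagged a marginalized group learns to reproduce the disparity as a “prediction.”

```python
# A minimal sketch, not from the book: training a "high-risk" classifier on
# biased historical decisions reproduces the bias. The data and the bias
# mechanism are synthetic assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)   # 1 = historically marginalized group
ability = rng.normal(size=n)         # true suitability, independent of group

# Historical labels: past reviewers flagged the marginalized group as
# "high risk" more often at the same level of ability.
flagged = (ability < -0.5 + 0.8 * group).astype(int)

X = np.column_stack([ability, group])
model = LogisticRegression(max_iter=1000).fit(X, flagged)

# Two applicants with identical ability but different group membership
# receive different risk scores: past discrimination, learned as a rule.
same_ability = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_ability)[:, 1].round(2))
```

Equally able applicants receive different risk scores simply because past reviewers treated their groups differently; the bias in the training labels becomes the model’s learned rule.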
This technical clarity sets the stage for Kasy’s broader argument: AI is not just about prediction but about what is predicted, and for whom. Data collection and analysis can indeed advance public goods such as medical research or education, improving health and broadening human capabilities. Yet the objectives encoded in AI systems ultimately mirror the priorities of those who control the “means of prediction.” If workers, rather than corporate owners, directed technological development, Kasy suggests, algorithms might prioritize fair wages, job security, and public welfare over profit.
But how can we take democratic control of the means of prediction? Like Erik Olin Wright, who advocated a combination of transformative strategies in How to Be an Anticapitalist in the Twenty-First Century, Kasy proposes an array of complementary actions rather than a single solution. These include taxes that account for AI’s social costs, regulations banning harmful data practices, and data trusts: collective institutions that manage data on behalf of communities for public purposes such as health research. The agents of change, in Kasy’s view, cannot be the tech companies themselves, whose primary duty is to shareholders. Instead, change must be driven by workers, consumers, journalists, and policymakers, who can exert strategic leverage through strikes, boycotts, bad press, litigation, and regulation.
The book’s strength lies in linking AI’s technical design to its political economy. Kasy provides a crucial technical foundation, showing how algorithms encode power, as seen in the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) judicial risk-assessment tool, which disproportionately flagged black defendants as high-risk, entrenching systemic racism. His work supplies a missing link to broader critiques of “techno-feudalism” by Cédric Durand and Yanis Varoufakis and to Shoshana Zuboff’s Surveillance Capitalism, which expose digital platforms’ rent-like profits and their commodification of data.
What distinguishes Kasy is that he grounds these critiques in AI’s technical mechanics: the algorithms he dissects decide who gets hired, who receives medical care, and what news each of us sees, often prioritizing profit over social welfare. He likens data privatization to the historical enclosure of the commons, arguing that tech giants’ control over the means of prediction concentrates power, undermines democracy, and deepens inequality.
From courtroom algorithms to social media feeds, AI systems increasingly shape our lives in ways that reflect their creators’ private priorities. As such, they should not be seen as neutral technological marvels but as systems shaped by social and economic forces. The real conflict lies not between humans and machines, as in The Terminator’s robot uprising, but between the tech capitalists controlling the machines and the rest of us. The future of AI depends not on technology itself but on our collective capacity to build institutions like data trusts to govern it democratically. Kasy reminds us that AI is not an autonomous force but a social relation, an instrument of class power that can be retooled for collective ends. The question is whether we have the political will to seize it.