The Algorithms That Dictate Our Lives Are Not Neutral
Algorithms are not apolitical tools that simply improve efficiency in online transactions or workplace coordination. They are instruments of control and should be regulated accordingly.

Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, on Tuesday, September 23, 2025. (Kyle Grillot / Bloomberg via Getty Images)
In September 2025, news outlets reported that families of American teenagers who died by suicide were suing OpenAI, Meta, and Character.AI. Their allegation: company products had simulated friendship, encouraged self-harm, and deepened emotional isolation. One father said his daughter’s chatbot had become “her only confidant and told her it was okay to give up.”
These are not isolated tragedies. They signal a broader and intensifying health risk: unregulated artificial intelligence (AI) systems are infiltrating the most intimate spaces of human life, shaping users’ mental states without oversight, safety standards, or accountability. The consequence is a preventable pattern of harm unfolding at population scale.
The damage does not end with consumers. It is also built into the lives of the workers who construct and maintain these systems. Scroll. Click. Suffer., a recent report by Equidem, the global labor rights group we work for, documented the experiences of 113 content moderators and data labelers across Colombia, Ghana, Kenya, and the Philippines. These workers spend more than eight hours each day reviewing graphic violence, child abuse, and hate speech in order to filter harmful material from public view and generate the datasets that train AI systems — work that exposes them to severe psychological strain without adequate mental health support, medical care, or proper mechanisms for redress.
What links these harms to consumers and workers is the algorithmic machinery of today’s digital platform economy. In both domains, it determines what is visible, what is hidden, and who bears the costs. Algorithms are not neutral tools that simply improve efficiency in online transactions or workplace coordination — they are instruments of social and labor governance and should be regulated as such.
Algorithm as Boss, Regulator, and Judge
Crucially, the algorithms with the greatest reach are controlled by the firms that dominate digital markets. Companies like Meta, ByteDance, and OpenAI that monopolize content flows also wield monopsony power over the labor and data inputs that sustain them.
On the labor side, monopsony power — the power of dominant buyers of labor inputs to unilaterally set the terms of employment — lets them squeeze workers and suppliers by setting wages, imposing punishing productivity metrics, or dictating contract terms with little room for negotiation. On the consumer side, monopoly control over platforms and interfaces allows a single firm to decide what products appear first, which content is amplified, or which services are accessible at all.
Platform consumers and workers are funneled into tightly engineered environments where visibility, choice, and well-being are subordinated to the firm’s profit model — whether through addictive recommendation loops, hidden fees, steering purchases toward preferred suppliers, or target-based control over wages and working hours. The issue, then, is not simply that algorithms malfunction or overreach but that they operate within a market structure where monopoly and monopsony reinforce one another.
Algorithms also govern because they collapse into a single technical form what managers, regulators, and markets once did separately: deciding what content people see, assigning tasks to workers, evaluating performance through opaque metrics, and enforcing discipline through automated penalties. Algorithmic authority is thus multifaceted, operating simultaneously as a regulator of consumption, a manager of labor, and an arbiter of market access.
The risks of this unacknowledged authority are evident across contexts. When algorithms funnel vulnerable users into spirals of harmful content, regulators treat it as a narrow issue of “content moderation” rather than evidence that the system is governing mental health outcomes. When automated pay-setting tools slash gig workers’ wages, the change is framed as a neutral market adjustment rather than acknowledged as algorithmic wage control. When warehouse workers are dismissed for failing to meet algorithmic “productivity” thresholds, the decision is rationalized as efficiency rather than recognized as termination by algorithm.
What is punishable when done by a boss is rendered invisible when done by a machine. If a human manager made these same decisions — pushing a teenager toward self-harm, cutting wages without negotiation, or firing a worker without explanation — the actions would be subject to labor law, liability standards, and public oversight. When an algorithm does so, the same actions are treated as neutral technical outcomes.
The danger lies not only in the technical design of these systems but in the legal and institutional framing that treats algorithms as proprietary business assets rather than as instruments of social and workplace governance. What disappears from view are questions of who holds power, how it is exercised, and at whose expense. The convergence of two forces — algorithmic control over human behavior and the legal insulation that shields these systems from scrutiny — drives both consumer risks and labor exploitation.
As users in the home economies of lead firms take legal action over consumer harms ranging from privacy violations to misinformation and mental health damage, it is critical that regulatory intervention not be framed solely as a matter of consumer protection. Dominant critiques of platform capitalism, whether grounded in antitrust enforcement or in concepts like “technofeudalism,” direct attention primarily to consumer powerlessness. They highlight monopolistic control over digital markets but often overlook how algorithms simultaneously govern labor markets, disciplining workers and extracting value through surveillance, ranking, and automated management. Focusing on labor does not treat consumer harms as secondary; it shows how the two are structurally connected, with the algorithm operating as the hinge between consumer control and labor exploitation.
The Legal Veil
Algorithms are usually seen as neutral instruments of commerce, but they function as the central infrastructure of monopsony in the digital platform economy. This consolidation of control is then obscured through a two-step legal and political displacement.
First, algorithmic systems are defined as tools of trade and innovation rather than mechanisms of labor governance. Their regulation is routed through consumer protection and competition law, where the algorithm is framed as a facilitator of matching, pricing, or ranking. Within this framing, platforms appear as intermediaries rather than employers; the algorithm becomes a technical feature, not a managerial authority.
This logic dominated recent negotiations at the International Labour Organization (ILO), where several governments — most notably the United States — resisted proposals to recognize algorithmic systems as instruments of workplace control. By insisting that algorithmic infrastructure belongs within the domain of commerce and innovation policy, they effectively placed it outside the reach of labor law and beyond the authority of institutions like the ILO.
Second, the internal architecture of these systems is shielded above all by intellectual property law and reinforced by restrictive contracts and employment misclassification. Trade secrecy, copyright, and database rights protect not only the underlying code but the entire decision-making apparatus: how tasks are assigned, how pay is calculated, how thresholds are set, and how disciplinary actions are triggered. Treated as proprietary business assets, these systems are exempt from public or regulatory scrutiny.
To give a few examples of the consequences: a regulator cannot compel disclosure of algorithmic thresholds; a union cannot negotiate over a system it is not legally permitted to inspect; and a worker cannot demand an explanation for a wage deduction or penalty.
Nondisclosure agreements (NDAs) then silence the very workers who interact with these systems daily, preventing them from speaking out about their conditions of work. These NDAs do not merely protect trade secrets; they function as tools of legal intimidation, suppressing whistleblowing, unionization, and public scrutiny. Workers fear that even describing routine experiences could trigger lawsuits or blacklisting.
At the same time, platforms classify workers as independent contractors and route labor through multilayered subcontracting chains, diffusing responsibility across layers of intermediaries. In practice, the parent company retains control through the algorithm that sets tasks, wages, and thresholds, but legally it is shielded from being recognized as the employer.
The result is a black box — politically untouchable, legally protected, and reinforced on all sides by overlapping regimes of intellectual property, contract, and corporate law. This legal fragmentation not only obscures accountability but also strengthens the firm’s monopsony power, allowing it to dictate wages and conditions unilaterally while denying workers the protections of formal employment.
Not a Neutral Tool
Challenging the legal invisibility of algorithmic management begins with recognizing it for what it is: a system of labor and social control, not a neutral technical tool. Taming the platform economy therefore requires regulation that addresses algorithmic harms in both consumer and labor markets.
In the context of labor rights, the first priority must be formal recognition of algorithmic management as a form of workplace governance. The ILO’s current deliberations provide an opportunity: member states should support a binding convention that treats algorithmic systems as part of the employment relationship — subject not only to basic labor standards but also to principles of transparency, due process, and meaningful consultation with affected stakeholders. Without such recognition, platforms will continue to dictate the terms of labor while disavowing the responsibilities of employers.
But international recognition is only the beginning. National governments must also close the regulatory void that allows platforms to outsource accountability. This means mandating transparency in algorithmic decision-making, banning nondisclosure agreements that silence workers, and applying joint liability across subcontracting chains and digital intermediaries. It also requires rejecting the notion that algorithmic opacity is a proprietary right, and instead treating it as a barrier to legal rights and democratic oversight.
As AI-driven platform labor expands across the Global South, these interventions are not optional. They are essential to protecting worker rights in the digital economy and to restoring public control over the systems that increasingly govern everyday life.