The Hidden Human Cost of AI Moderation

Training AI often means staring at humanity’s worst atrocities for hours at a time. Workers tasked with this labor endure psychological injury without support — and face legal threats if they speak about it.


I signed the NDA like everyone else — didn’t think twice at the time. But now it feels like a trap. I’m living with nightmares from the content I saw, but I can’t even talk about it in therapy without fearing I’m violating the NDA.

Content moderator, Colombia

The artificial intelligence boom runs on more than just code and compute power — it depends on a hidden, silenced workforce. Behind every AI model promising efficiency, safety, or innovation are thousands of data labelers and content moderators who train these systems by performing repetitive, often psychologically damaging tasks. Many of these workers are based in the Global South, working eight to twelve hours a day reviewing hundreds — sometimes thousands — of images, videos, or data points, including graphic material involving rape, murder, child abuse, and suicide. They do this without adequate breaks, paid leave, or mental health support — and in some cases, for as little as $2 an hour. Bound by sweeping nondisclosure agreements (NDAs), they are prohibited from sharing their experiences.

The psychological toll is not incidental. It is the predictable outcome of an industry structured around outsourcing, speed, surveillance, and the extraction of invisible labor under extreme conditions — all to fuel the profits of a tiny corporate elite concentrated in the Global North.

As researchers involved in developing Scroll. Click. Suffer., a report by the human rights organization Equidem, we interviewed 113 data labelers and content moderators across Kenya, Ghana, Colombia, and the Philippines. We documented over sixty cases of serious mental health harm — including PTSD, depression, insomnia, anxiety, and suicidal ideation. Some workers reported panic attacks, chronic migraines, and symptoms of sexual trauma directly linked to the graphic content they were required to review — often without access to mental health support and under constant pressure to meet punishing productivity targets.

Yet most are legally barred from speaking out. In Colombia, 75 out of 105 workers we approached declined interviews. In Kenya, it was 68 out of 110. The overwhelming reason: fear of violating the sweeping NDAs they had signed.

NDAs don’t just safeguard proprietary data — they conceal the exploitative conditions that make the AI industry run. These contracts prevent workers from discussing their jobs, even with therapists, family, or union organizers, fostering a pervasive culture of fear and self-censorship. NDAs serve two essential functions in the AI labor regime: they hide abusive practices and shield tech companies from accountability, and they suppress collective resistance by isolating workers and criminalizing solidarity. This enforced silence is no accident — it is strategic and highly profitable. By atomizing a workforce that cannot speak out, tech companies externalize risk, evade scrutiny, and keep wages low.

Originally created to protect trade secrets, today NDAs have become tools of labor repression. They enable dominant tech firms to extract value from traumatized workers while rendering them invisible, disposable, and politically contained. Deployed through layered subcontracting chains, these agreements intensify psychological harm by forcing workers to carry trauma in silence.

To challenge this regime, NDAs must no longer be treated as neutral legal instruments. They are pillars of digital capitalism — technologies of control that must be dismantled if we are to build a just and democratic future of work.

The Hidden Workforce Behind Our Feeds

In today’s AI economy, Big Tech firms exercise what can be described as dual monopsony power. A monopsony is a market in which a single dominant buyer, or a handful of buyers, exerts outsize control over sellers. First, companies like Meta, OpenAI, and Google dominate the product market: they control the platforms, tools, and data infrastructures that shape our digital lives. Second, they act as powerful buyers in the global data labor supply chain, outsourcing the most grueling and undervalued work, such as content moderation and data annotation, to business process outsourcing (BPO) firms in countries like Kenya, Colombia, and the Philippines.

In these labor markets, where unemployment is high and labor protections are weak, corporations enjoy wide latitude to dictate terms of employment. Lead firms determine task volume and pay rates, effectively setting the margins for BPO firms. These margins in turn determine wages, working hours, and industrial discipline practices designed to hit productivity targets. In this setup, workers have little power to say no. Platforms impose strict performance metrics, algorithmic surveillance, and gag orders — yet maintain legal and reputational distance from the labor conditions they create.

The harm is real and growing. Take the case of Ladi Anzaki Olubunmi, a content moderator reviewing TikTok videos under contract with outsourcing giant Teleperformance. She died after collapsing from apparent exhaustion. Her family says she had complained repeatedly about excessive workloads and fatigue. Yet ByteDance, the parent company of TikTok, has faced no consequences — shielded by the structural buffer of intermediated employment.

This system facilitates what some scholars now describe as technofeudalism: a return to feudal-like relations, not through land ownership, but through control of the digital commons via opaque data infrastructures, proprietary algorithms, and a workforce made invisible through subcontracting and gagged by NDAs. For users, these algorithms determine what content is seen. For workers, they take the form of relentless performance dashboards — a modern-day overseer.

NDAs not only silence these workers but prevent them from raising alarms when algorithmic systems threaten the safety of the digital commons — or when the content they encounter poses real risks to the public. Kenyan data labelers, for instance, described reviewing videos containing subtle yet clear incitement to communal violence — but had no channels for reporting imminent threats.

The NDA has become a modern oath of loyalty — silence at all costs. Platforms may rebrand or rotate in and out of dominance — today it’s Meta and OpenAI, tomorrow it may be others — but the model of labor extraction remains the same: one built on distance, control, and disposability.

A Global Health Crisis by Design

What emerges from this business model — built on outsourcing, suppression, and the commodification of forced psychological endurance — is not a series of isolated workplace injuries. It is a public health crisis, structurally produced by the AI industry’s labor regime. Workers are not just exhausted or demoralized; they are being mentally broken.

In Scroll. Click. Suffer., we heard from content moderators who reported hallucinations, dissociation, numbness, and intrusive flashbacks. “Sometimes I blank out completely; I feel like I’m not in my body,” said a worker in Ghana. Others described losing their appetite, developing chronic migraines, or suffering persistent gastrointestinal issues — classic symptoms of long-term trauma. A Kenyan moderator said she could no longer go on dates, haunted by the sexual violence she was forced to view daily. Another described turning to alcohol just to be able to sleep.

This harm doesn’t remain confined to individuals — it ripples outward into families, relationships, and entire communities. In countries where mental health care infrastructure is severely underresourced, the burden is pushed onto overworked public systems and households. In most of these workplaces, even basic mental health support is absent. Some offer short “wellness breaks,” only to penalize workers later for falling short of productivity targets. As Ephantus Kanyugi, vice president of the Data Labelers Association of Kenya, put it:

Workers come to us visibly shaken — not just by the trauma of the content they’re forced to see, but by the fear etched into them by the NDAs they’ve signed. They’re terrified that even asking for help could cost them their jobs.

This is not incidental distress but an institutionalized form of extraction, with the emotional strain borne entirely by workers. The AI industry extracts surplus value not only from labor time but also from psychic endurance — until that capacity collapses. Unlike traditional factory work, where injuries can be seen, named, and sometimes collectively resisted, the damage here is internal, isolating, and much harder to contest.

NDAs intensify the crisis. They don’t merely shield companies from legal liability; they sever the very conditions necessary for healing and resistance. By gagging workers, NDAs prevent the formation of collective identity. That political silencing compounds the health crisis: workers are unable to name what is happening to them, let alone organize around it. The result is a class of traumatized, disposable workers who suffer in silence while the system that harms them remains protected — and profitable.

Where Do We Go From Here?

The scale and severity of this crisis demand more than piecemeal reform or individualized coping strategies. It calls for a coordinated, global response grounded in worker power, legal accountability, and cross-movement solidarity. As labor organizers and activists, we must begin by naming what we are up against: not just bad actors or isolated violations but a deliberately engineered system — one that profits from rendering labor invisible, extracts value from trauma, and silences dissent through coercive contracts like NDAs.

The first step is dismantling the mechanisms of silence. NDAs that prevent workers from speaking about their conditions — whether to therapists, families, journalists, or unions — must be banned from labor contracts. Governments and international bodies should recognize these clauses not as standard business practice but as violations of fundamental rights: freedom of expression, access to care, and freedom of association. Where platforms claim these agreements are needed to protect trade secrets, we must ask: At what cost, and to whose benefit?

Second, we must build worker power across borders. Content moderators and data workers are often isolated by design — scattered across subcontractors and countries, bound by legal and technological barriers. But new formations are emerging. In Kenya, the Philippines, and Colombia, workers are sharing testimonies despite threats of retaliation and job loss. These local efforts must be connected through transnational labor alliances that can jointly name employers, demand protections, and fight for shared standards. Tech firms may hide behind outsourcing, but the harm is consistent — and so must be our response.

Third, we need enforceable global standards that treat psychological health as central to decent work. Wellness breaks and hotline numbers are not enough. Platform companies must be held directly accountable for labor conditions across their outsourced chains. That includes legally binding rules for working hours, mandatory trauma support, rest periods, and protections from retaliation. Governments, trade unions, and international labor bodies must insist that companies like Meta, TikTok, and OpenAI cannot be considered global AI leaders while denying fundamental rights to the workers who train their models.

Finally, we must reject the notion that AI regulation is simply about ethics or innovation. This is a labor rights issue — and it must be treated as such. Ethics without enforcement is hollow, and innovation that comes at the cost of human dignity is exploitation by another name. Organizers, researchers, and allies must push for a new narrative: one that measures the intelligence of any system not only by its performance but also by how it treats the people who make it possible.

The Future of AI Is a Mirror of Our Values

The International Labour Conference, the peak decision-making body of the International Labour Organization (ILO), has just completed the first round of standard-setting discussions on decent work on digital labor platforms. With a mandate to develop a binding convention and supporting recommendation, the ILO must ensure that regulatory frameworks apply not only to directly contracted platform workers but also to those hired through intermediaries. These standards must protect the fundamental rights to form or join trade unions and to bargain collectively — including through explicit prohibitions on NDAs that systematically silence workers and undermine collective action.

What does it say about the world we’re building when the most celebrated technology of our time runs on the silent suffering of some of its most precarious workers? While billions are poured into AI and headlines hail its breakthroughs, the very people who make it possible — by absorbing unimaginable violence to train machines — are left voiceless, broken, and discarded. This isn’t progress. It’s calculated blindness.

If we build AI on a foundation of trauma and repression, we are not creating tools for human advancement — we are constructing systems that forget how to care, how to listen, how to be just. And if we don’t fight to change this now, the cost won’t only be borne by the content moderators in Nairobi or the data labelers in Manila. It will be borne by all of us — in the silence we normalize, the harm we conceal, and the future we allow to be built on their pain.