Kenyan Courts Keep Telling Meta to Let Workers Unionize

Last year Kenyan courts ruled that Meta, Facebook’s parent company, broke the law by firing workers who had attempted to unionize. Meta responded by leaving those workers unpaid for more than eight months and fighting the rulings in court.

Kenyan lawyer Mercy Mutemi (seated fourth from right), along with fellow counsel, follows proceedings during a virtual pretrial consultation with a judge and Meta’s legal counsel, Nairobi, Kenya, April 12, 2023. (Tony Karumba / AFP via Getty Images)

Last year, workers filed three lawsuits in Kenyan courts against Meta, Facebook’s parent company, over its unwillingness to work with organized labor.

Workers sued Meta, which used third-party companies to facilitate content moderation in Kenya, for failing to provide adequate pay, training, or health care to employees who are regularly required to watch images of rape, murder, and torture.

In violation of Kenyan law, subcontractors working for Meta fired workers who attempted to unionize last year. When these cases went to court, Meta struck back, alleging that third-party firms like Sama, an AI outsourcing firm operating in Kenya, were solely responsible for the labor law violations. Kenyan courts disagreed, ruling that they had the power “to enforce alleged violation of human rights and fundamental freedoms” by Meta and whatever companies it had subcontracted.

By the end of last year, 184 content moderators had sued Meta and its contractors, claiming that the companies fired them as a direct response to their attempts to unionize their workplaces.

Talks between Meta, Sama, and their employees stalled in October despite pressure from the courts. Workers have now gone unpaid for eight months, unable to afford food and rent despite still being legally employed by Sama and Meta. Many have had to rely on an online crowdfunding campaign to survive.

Mercy Mutemi, a lawyer representing the workers, told reporters that Meta had kept its employees waiting until December last year for an offer that she described as “a very small amount that cannot even take care of the petitioners’ mental health.”

The workers’ lawyers filed a motion to hold Meta in contempt for breaching a court order requiring it to pay the wages of hundreds of its content moderators. But this time the Kenyan courts sided with the tech firm, ruling that it did not “deliberately and contemptuously” breach court orders.

According to Foxglove Legal, a UK-based tech justice nonprofit, Meta has responded to these complaints by discreetly contracting its moderation to another company.

Martha Dark, founder and director of Foxglove, was undeterred when asked about Meta’s attempts to subvert the law in Kenya. “We remain confident of our case overall, as we have prevailed on every substantive point so far,” she said to Jacobin. “The most important ruling remains the one we won in June: Meta can no longer hide behind outsourcers to excuse the exploitation and abuse of its content moderators.”

“A Broader Phenomenon”

A very similar case is now taking shape in Spain, where Meta’s content moderation subcontracting faces legal challenges. According to a report in El Periódico, a suit has been brought against Meta’s local subcontractor, CCC Barcelona Digital Services.

The plaintiff, a twenty-six-year-old Brazilian, has claimed they have “been receiving psychiatric treatment for five years owing to exposure to extreme and violent content on Facebook and Instagram, such as murders, suicides, terrorism and torture.”

Last October, while Meta continued to ask Kenyan courts for extensions, the Barcelona-based newspaper La Vanguardia reported that around 20 percent of CCC Barcelona Digital Services’ staff were not working “as a result of psychological trauma from reviewing toxic content.”

Meta hopes AI tools will automate these processes, using software rather than humans to scan content for red flags and violations. But while the company has rolled out some of these programs, their use has led to algorithmic overcorrection that sweeps away valuable, non-violating content.

Last December, Meta’s independent oversight board found that it “had overcorrected when it lowered its threshold for automated tools to remove potentially rule-breaking content following the attack on Israel by Hamas on October 7.”

“While reducing the risk of harmful content, it also increased the likelihood of mistakenly removing valuable, non-violating content from its platforms,” said the board, citing posts that “inform the world about human suffering on both sides of the conflict.”

“Backbreaking Work”

“Making sure that what is posted online is not harmful to people who see it is a nightmare of a job,” wrote Nathan Nkunzi, the chairperson of the organizing committee of the African Content Moderators Union.

Nkunzi was working as a content moderator for Meta in Nairobi when he fell victim to the company’s mass layoffs in March last year.

“It was the job of me and my colleagues to make sure that all user-generated content on a platform does not pose harm to users. We sifted through all of the words, graphics, images and videos in real time as they were posted onto Facebook, hunting for obscene, illegal, inappropriate, or harmful material,” he explained:

It is time-consuming and backbreaking work, requiring many irregular hours. Moderators must work thoroughly to catch violations hidden within subtle nuance.

We had to see the violence, sexually explicit material, pornography, blood and gore first before everyone else, then make a judgment call on whether or not it ought to go up. We had the responsibility of standing between this torrent of horror and you and your loved ones.

Nkunzi and his colleagues are simply demanding rights already offered to Meta workers based in larger and more lucrative markets. In other regions, the company has acknowledged how punishing conditions in its content-moderation operations can be. In 2020, Meta, then Facebook, agreed to pay $52 million to compensate workers for post-traumatic stress disorder they had developed on the job, a settlement that amounted to a minimum of $1,000 per employee.

AI is simply not yet good enough to perform content moderation competently. Until the technology matures, Meta will likely continue to rely on human workers, often poorly paid and based in developing countries, to moderate our vast networks of communication and information sharing.

Developments in Kenya show that the tech industry’s exploitative practices can be brought to a halt by a state willing to enforce the law and by organized labor ready to fight for its rights. Yet William Ruto’s government, desperate to cozy up to the tech industry, could prove the biggest obstacle to workers in their struggle for dignity and adequate compensation for their labor. This latest campaign could force Kenya’s president, who has presented himself as a champion of ordinary citizens hustling to make a living, to live up to the ideals on which he campaigned.