The Fight Against the AI Systems Wrecking Lives
Federal and state governments are now using AI for everything from determining eligibility for welfare benefits to predicting child abuse and criminal activity. Critics say faulty algorithms are ruining people’s lives and leaving victims little recourse.

In Arkansas, former legal aid attorney Kevin De Liban won case after case for people who were denied medical care or other benefits because of artificial intelligence systems. (Andriy Onufriyenko / Getty Images)
As a former legal aid attorney, Kevin De Liban knows President Donald Trump’s plan to double down on artificial intelligence comes with major risks. Over and over, De Liban has seen how automated decisions can ruin people’s lives.
Just before Christmas in 2022, for example, Robert Austin and his daughter were living in his car in El Paso, Texas. As a single father, he had a hard time finding a shelter that would take them both.
He applied for food stamps and temporary aid and tried to enroll his daughter in Medicaid so she’d get health insurance from the government. Though they were eligible, his benefits were denied. Austin tried again; this time, the Health and Human Services helpline said the paperwork he’d uploaded had been rejected and he needed to reapply. “They kept asking for the same forms, over and over again,” Austin says. Every time he’d call to try to get to the bottom of things, he ended up “full circle back where [he] began.”
Eventually, Austin turned to lawyers at Texas RioGrande Legal Aid, who learned that Texas’s automated verification system, developed by multinational consulting firm Deloitte, had made extensive and repeated errors, including issuing incorrect notices, wrongful denials, and losing paperwork.
For the next two years, Austin continued to reapply to Texas’s safety-net programs as he bounced in and out of temporary housing, eventually losing his car. While his daughter grew into a busy toddler, he turned to the unreliable kindness of strangers on the street. “I ended up begging people for money so I could give her pull-ups, or child care so I could take a [medical] appointment,” he says.
Though De Liban was not involved with Austin’s case, he has worked with scores of people trapped in similar situations — victims of algorithmic decisions gone wrong. These kinds of systemic harms are already impacting Americans in every phase of their lives, he says. “Our legal mechanisms are totally insufficient to deal with the scale and scope of harms these technologies can cause.”
In Arkansas, De Liban won case after case for people who were denied medical care or other benefits because of artificial intelligence systems. But each victory underscored the deeper problem: the sheer scale of government decisions being made by machines mimicking human judgment, whether through simple code or machine learning, meant that individual legal victories weren’t sufficient.
That’s why De Liban recently started a nonprofit called TechTonic Justice to help people fight back. He’s building resources to help affected communities hold these faceless, impersonal systems accountable, spreading the word about the problem in publications like the Hill and on NPR. The goal is to provide training for lawyers, educate advocates, and help affected people — those denied benefits like health care or Social Security — participate in policy conversations.
The stakes for his work just got higher. On Trump’s first day back in office, the president removed existing federal safeguards for AI.
After a $1 million donation from OpenAI CEO Sam Altman to Trump’s inaugural fund, the company announced it would create a version of the popular artificial intelligence platform ChatGPT for government agencies — including those handling highly sensitive information like nuclear weapons security. The announcement followed the president’s unveiling of a $500 billion joint venture by OpenAI, Oracle, and SoftBank to build new data centers.
Elsewhere, Elon Musk’s new Department of Government Efficiency is already using artificial intelligence to flag programs at the Centers for Medicare and Medicaid Services and the Department of Education as fraudulent or wasteful, then citing those flags as justification to freeze or claw back payments.
Given large language models’ well-documented habit of making information up, De Liban is worried. “You’ve got to be sure the information you’re acting on is correct, so you don’t make choices that end up harming the public,” he says.
Although nearly half of all federal agencies already use or are planning to use AI, the government’s use of the technology is largely unregulated. Because of the lack of transparency and the technical complexity of these systems, De Liban and other advocates say it’s hard to hold the government or its contractors liable for algorithms’ flaws, even when they perpetuate biases or limit access to essential services.
Government officials are concerned about the lack of regulation too. “We worry about all these automated predictions and the way they really do scale inaccuracies or discriminatory outcomes,” said an official at the Consumer Financial Protection Bureau, a government agency tasked with protecting consumers — until Trump recently fired its director and attempted to dismiss an additional 1,500 employees.
In a recent report, De Liban found that even before the Trump administration’s expansion, artificial intelligence and related technologies influenced basic decisions for almost all 92 million low-income Americans — impacting how people live, work, learn, and care for their families. And those technologies often reach conclusions that harm people.
They “go wrong in the same ways time after time,” he explains. Long before Austin needed help in Texas, for example, Deloitte already knew its eligibility systems were inaccurate; they had previously failed in Kentucky, Tennessee, New Mexico, Arkansas, and Rhode Island.
“Government exists to help facilitate a common well-being,” De Liban says. “Right now, it’s failing.”
“What Is Really Behind the Curtain?”
Back in 2016, De Liban was working at Legal Aid of Arkansas, providing free legal services to low-income people. That’s when disabled and elderly Arkansans relying on a Medicaid waiver program for home care to help with basic tasks like going to the bathroom or taking medication were abruptly told that their care would be drastically cut back or eliminated. His phone started ringing with people confused about why they were losing their benefits. Many told De Liban “the computer did it.”
It took some digging for him to discover the details: Seeking to cut costs, the state had switched from a primarily paper-based process conducted by nurses to a standardized algorithm for care allocation developed by one of the founders of a coalition called InterRAI, which has licensed its code to health departments in at least twenty-five states.
The algorithm sorted people needing medical care at home into categories, creating calculations that another software vendor used to implement the program. But it ignored important factors, like a history of medical needs. De Liban began keeping a list of the resulting “algorithmic absurdities,” like a client who was marked as not having a foot problem because the foot in question had been amputated.
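A minimal, hypothetical sketch of that kind of rigid rule (the field names and point values here are invented, not the actual assessment code) shows how a score drawn from a single checkbox cannot tell “no reported problem” apart from “no foot”:

```python
# Hypothetical illustration only: invented field names, not the actual assessment code.
# A rule that awards care points from one yes/no checkbox cannot distinguish
# "no reported foot problem" from "no foot at all."

def foot_problem_points(assessment: dict) -> int:
    """Award extra care points only if the 'foot problem' box was checked."""
    return 2 if assessment.get("foot_problem_reported") else 0

# A client whose foot was amputated has nothing to check on that line of the form.
amputee = {"foot_amputated": True, "foot_problem_reported": False}
print(foot_problem_points(amputee))  # prints 0: scored as having no foot problem
```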
De Liban spent months responding to desperate calls, gathering evidence for a lawsuit in federal court. It was lonely, taxing work, requiring all-nighters and long weekends. He’d pull up to ranch houses like Shannon Brumley’s, small in a sweep of overgrown farmland. Brumley, a forty-four-year-old who became quadriplegic after a motorcycle accident, had recently had his care cut in half by the algorithm. He feared he would be forced into a nursing home, stranding his teenage son.
“Whoever is trying to make these rules,” he told De Liban from his modified wheelchair, “if they could be handicapped for just one week — live my life — they would change their mind.”
As De Liban listened in kitchens and living rooms across the state, the weight of the stories grew. “You’re the difference between this person getting what they need and not,” he said. Looking for more hours in the day, he’d schedule calls from the car until he lost reception on the rural roads. Then he’d turn the volume up on his old Hyundai’s speakers and bump hip-hop.
Growing up in Fremont, California, listening to artists like Tupac Shakur and Public Enemy had helped foster his sense of justice, providing a political education in verse. Rolling down the red gravel roads, the beats were both a reprieve and a reminder. “Cyber warlords are activating abominations,” he’d spit along with Deltron 3030. “Arm a nation with hatred? We ain’t with that.”
Ultimately, De Liban discovered Arkansas’s algorithm wasn’t even working the way it was meant to. The version used by the Center for Information Management, a third-party software vendor, had coding errors that didn’t account for conditions like diabetes or cerebral palsy, denying at least 152 people the care they needed. Under cross-examination, the state admitted it had missed the error because it lacked the capacity even to detect the problem.
For years, De Liban says, “The state didn’t have a single person on staff who could explain, even in the broadest terms, how the algorithm worked.”
As a result, close to half of the state’s Medicaid program was negatively affected, according to Legal Aid. Arkansas’s government didn’t measure how recipients were impacted and later said in court that they lost the data used to train the tool.
As Texas resident Brooke Wilson has learned, states’ refusal or inability to prevent these algorithmic errors is now impacting people in the earliest stages of life. Her daughter Harper was born three months premature and still needs occupational and physical therapy for digestive and lung conditions, as well as an aide who regularly comes to their house to help care for her.
In 2023, the Wilsons received a letter saying Harper’s Medicaid would be terminated. The system claimed the family had failed to provide tax information, even though Harper’s coverage is due to her long-term disability and unrelated to the family’s income.
Harper is just one of 1.8 million children in Texas — almost the entire population of Houston — who were removed from the state’s insurance program in the last year. Sixty-eight percent of these children lost coverage not because they were found to be ineligible, but due to administrative issues like the Wilsons’, including waylaid paperwork or deadlines that may have been impossible to meet.
Like Austin, the Wilsons had run into problems with Deloitte’s automated eligibility verification system. A complaint filed with consumer protection regulators at the Federal Trade Commission alleges that “hundreds of thousands of people have been and are being injured by the system’s failure to accurately automate the relevant eligibility rules.”
Reducing these disenrollments to a number obscures the true human toll, says Maureen O’Connell, an attorney at Disability Rights Texas who says she’s “one of the people who answers the call when somebody’s life has been turned upside down because they’ve lost Medicaid.”
While the Wilsons appealed, Harper’s health deteriorated. They got a second notice — this one incorrectly claiming that Harper was over the age of eighteen and somehow no longer had a disability. Like Austin, Brooke found that when she called the state helpline, even the specialists she was transferred to couldn’t explain the decision. “211 is like the Wizard of Oz,” O’Connell says. “What is really behind the curtain?”
In 2023, a whistleblower from the Texas Health and Human Services Commission revealed that a single system error resulted in at least 24,000 children unnecessarily losing insurance. In emails to Texas representative Lloyd Doggett, the tipster disclosed that a coding mistake had also resulted in the state missing out on $100 million in federal funding, while thousands were prevented from accessing critical services during and after pregnancies. These kinds of tech failures, noted the insider, had created an automated system that can’t “effectively identify and preemptively mitigate issues before recipients are adversely impacted.”
After a lawyer helped the Wilsons resolve their coverage this fall, they were surprised to hear from Medicaid over Christmas, demanding evidence, once again, of Harper’s lifelong disability. When they tried to provide it, the system wouldn’t allow the family to electronically upload her lengthy medical records. The Wilsons have had to return to their lawyer and schedule another hearing, while Harper’s care remains uncertain. “If I wasn’t as tenacious as I am, I would have already given up,” Brooke says, “and I think that that’s what they’re hoping for.”
When set loose into a privatized, fragmented system, AI amplifies existing inequalities. Because multiple vendors are often used, “no one at the state has the full picture of how these systems are potentially layering on top of each other,” De Liban explains. Without better oversight, AI’s harms can compound in ways no one fully understands — until it’s too late.
“These Are Children, Not Lab Rats”
The algorithms that aim to streamline public systems often wade into deeply personal and ethically fraught territory, nowhere more starkly than in efforts to prevent child abuse.
In 2012, researchers in New Zealand set out to develop an algorithm to predict a newborn’s risk of being abused. Existing child welfare programs often struggled to identify when interventions were needed, missing cases where children went on to be mistreated. They hoped to find a way to intervene earlier.
The New Zealand Ministry of Social Development allowed the research group Centre for Social Data Analytics access to its database, where the team identified 132 variables they thought could predict mistreatment — including whether the mother had received unemployment benefits or was unmarried and whether the parents had a criminal history. The researchers planned to conduct an observational study, using the tool to screen a group of babies and monitor those found to be high-risk to see if they actually went on to be abused.
In practice, being flagged by such an algorithm could lead to increased scrutiny from child welfare services, potentially launching investigations or interventions like family separations.
Critics warned the resulting model wasn’t accurate and could reinforce biases against Māori families, whose children are disproportionately removed into state care. Amid public pushback, the incoming social development minister Anne Tolley halted the program, writing, “Not on my watch! These are children, not lab rats.”
Meanwhile, officials in Allegheny County, Pennsylvania, hired the same researchers to move forward with a similar program in the United States. Since 2016, Pittsburgh social workers have used this family screening tool to evaluate child welfare reports, analyzing data from health, criminal justice, and welfare records to assign risk scores to children. The algorithm guides welfare workers in deciding which cases require further investigation — probes that can lead to parents losing their legal rights. The following year, Douglas County hired the Centre to replicate the model in Colorado.
As other states started considering similar programs, Anjana Samant, a former senior attorney with the American Civil Liberties Union’s Women’s Rights Project, began looking into the choices made in the algorithms’ design, baseline assumptions she says actually “function as policy decisions.” Her team quickly found that the training data were often biased.
One of the variables considered by the Allegheny Family Screening Tool, for example, is whether a parent has used mental or behavioral health services — but it only counts care accessed through public insurance programs, missing higher-income people with private insurance. It’s one of many data points that reflect racial and social factors without exploring cause and effect.
The algorithm’s accuracy is dependent on the veracity of the underlying data itself. In Colorado, the Douglas County welfare predictor relies on records from its error-riddled benefits management system, which is contracted to Deloitte — the same company running Texas’s flawed benefits program. A 2023 audit of the Douglas County Medicaid program found at least one problem in 90 percent of the sampled notices mailed to recipients, such as incorrect disability status or mailing address.
Accuracy in training data may become an even bigger problem as for-profit companies create their own child abuse screening algorithms. In 2016, Illinois selected Eckerd Connects, a privately owned foster services consultant, to run its Rapid Safety Feedback algorithm, which similarly claims to identify children at high risk of abuse. The contract was granted after Eckerd’s chief external relations officer was appointed to a senior role in the state’s Department of Children and Family Services.
“A company is designing a tool to make recommendations about whether to investigate a family, and then also makes money entering into contracts with the child welfare agency to provide services to families under investigation,” says Samant. Though Eckerd’s program has now been used in seven other states, Illinois ended their contract with the company in 2017 after a series of children died from abuse the program failed to catch.
Some of the criteria considered by these tools result in decisions that would otherwise be illegal, says Robyn Powell, a law professor at Stetson University College of Law. A number of variables essentially identify whether a person is disabled, like whether someone receives special education. More health care visits often translate to a higher risk score, penalizing disabled parents, who tend to have higher health care needs. Powell points out that “these predictive tools are using disability” as a form of screening, disproportionately flagging people with disabilities, even though that’s illegal.
In 2023, the Justice Department began an investigation into the Allegheny Family Screening Tool after people filed several civil rights complaints about discrimination against those with disabilities. (Powell and her colleagues also found its results had a racial bias.) Last spring, the federal Department of Health and Human Services explicitly stated that its ban on disability discrimination extends to discrimination “through a recipient’s use of algorithms, automated decision-making, and artificial intelligence.” The problem, Powell says, is that existing laws are not being enforced.
Meanwhile, these tools are eroding privacy for everyone. There’s often no way to prevent your information from being swept into these kinds of algorithms, Samant explains. Currently there’s nothing stopping companies from mining hospital and medical records and using what they find — like if someone on Medicaid went to see a therapist — to flag people as potential child abusers.
Similarly, in the private market, public databases are training artificial intelligence programs used to screen tenants, determine mortgage rates, monitor workplace performance, and conduct police surveillance.
As De Liban dug into these kinds of screening tools, he learned that much of this decision-making is invisible. It’s hard to know when automated systems are used to evaluate or to collect information, or what data those programs draw from. “People often aren’t aware that’s the reason they’re getting denied benefits,” De Liban says. “That’s really a feature of AI, not a bug.” Nor is there always meaningful recourse: Algorithms like the family screening tool rely on criteria families cannot change, offering people no way to escape their pasts. And often, there are no real alternatives to find a place to live or to go to a doctor.
“We’re just starting to see the way that all these harms are going to intersect across the course of somebody’s life,” De Liban says. “I don’t think people really understand how extensive these AI systems are.”
“Significant Surveillance Apparatus”
Legal victories don’t always prevent these hidden systems from reshaping lives. That’s something De Liban learned in 2016, as he nervously approached the stand in court for the Eastern District of Arkansas, his hair freshly combed. It was his first federal trial. Over the next three days, he argued that the Arkansas Department of Human Services had concealed the home-based care eligibility algorithm from public oversight.
The judge agreed, ruling that the state couldn’t use the algorithm until it could explain its decisions. The agency refused to make changes that would address its harms, so the following year, De Liban sued again. He won a second time, the judge reiterating that Arkansas had failed to adequately notify the public.
Then in 2019, Arkansas launched a new eligibility system relying on a different algorithm — this one immediately determining that 30 percent of the people enrolled in the state’s program were ineligible to receive care at home. The fight felt like Groundhog Day. “The incentives never align for anybody to do right by poor people,” De Liban explains, frustrated.
He filed yet another federal lawsuit for three people whose benefits were cut, making it difficult for them to perform basic tasks like using the bathroom. When someone in this situation successfully sues the state, they typically don’t receive a settlement for their suffering or the money or care they’ve lost. This means “the state doesn’t really have a financial incentive to get it right,” De Liban says. Nor can individual state officials who act illegally usually be held personally responsible.
By arguing that his clients had a right to adequate notice and that the state defendants participated directly in the policy change, De Liban overcame this qualified immunity. The state eventually settled with his clients in 2023.
The experience left him determined to work on larger regulatory remedies. “There’s only so much you can do on an individual level,” he says. “At a certain point, we have to really contend with how we develop transformative legal and policy solutions.”
De Liban’s now building a coalition between tech experts and civil rights and anti-poverty advocates. Eventually, he aims to “build long-term power among the people and communities that AI is leaving behind.” His first step was to quantify how many people have already been affected by AI; his unsettling findings show that close to a third of the country has had some aspect of their lives decided by algorithms, often involving an important government service.
That number is likely to grow. While President Joe Biden issued an executive order in 2023 directing federal agencies to develop guidelines for their use of AI, Donald Trump reversed the order on his first day in office. On the campaign trail, he’d claimed it “hinders AI innovation and imposes radical left-wing ideas on the development of this technology.”
De Liban, like many experts, says that Biden’s executive order, which excluded certain types of automation, didn’t go far enough anyway. “Federal agencies have shown no willingness to take that level of oversight,” he says. Now, he anticipates government agencies like the Department of Homeland Security and US Immigration and Customs Enforcement will rapidly expand their use of AI.
“They’re talking about mass deportation,” De Liban notes, which will likely require “significant surveillance apparatus to identify, track, and process people.”
Cyber Hall Monitors
In schools across the country, algorithms are already enabling widespread surveillance, watching over children as they grow — and just like child welfare tools, using that data to make predictions about their futures.
Several months after nineteen students died in a 2022 elementary school shooting in Uvalde, Texas, administrators in the Atlanta Public School district signed a contract with surveillance company Fusus. The company sells a cloud-based camera platform using AI image detection to create a real-time crime monitoring network.
The multiyear contract cost the school district $125,000 in its first year. As cameras were added to school buildings across Atlanta, the company also acquired access to vast amounts of employee and student data, including educational records, addresses, phone numbers, emails, discipline records, test results, homeless or foster care information, juvenile and criminal records, medical records, biometric information, disabilities, socioeconomic information, food purchases, political and religious information, text messages, internet search activity, and geolocation information.
In many cases, this kind of personal information would otherwise require a warrant. It’s not clear how this data will be used, although the company could potentially sell it or use it to train new AI products.
Fusus is just one of many companies across the country now profiting from this kind of monitoring — sometimes watching students even when they’re at home. “Tech vendors have been very aggressive at marketing these technologies to schools,” says Clarence Okoh, a senior attorney at the Georgetown Law Center on Privacy and Technology. For example, ads for Gaggle, a company that monitors students’ online activity, warn that schools without online safety policies may lose out on discounted internet rates. The result, he says, has “wreaked havoc.”
A 2023 survey found that 38 percent of teachers said their school shares sensitive student data with law enforcement, while 36 percent of schools used predictive analytics to identify children who might commit crimes in the future.
One of the most invasive examples is in Pasco, Florida, where the sheriff’s office built an algorithm to predict which students would commit future crimes. Echoing the plot of the movie Minority Report, the tool identified variables it claimed could spot students at high risk of engaging in criminal activity. Children racked up points if they had been subjected to abuse, if they were a victim of a crime, or if their parents were divorced or had a record with law enforcement. In other words, “it criminalized relationships with your family,” Okoh says.
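A minimal, hypothetical sketch of such a point system (the variables and weights below are invented for illustration and are not the Pasco formula) shows how a child’s own victimhood and family circumstances stack up into a “risk” score:

```python
# Hypothetical illustration only: invented variables and weights, not the Pasco formula.

def student_risk_points(record: dict) -> int:
    """Accumulate 'risk' points from circumstances the child cannot control."""
    points = 0
    if record.get("experienced_abuse"):
        points += 3  # being abused raises the child's own score
    if record.get("crime_victim"):
        points += 2  # so does having been the victim of a crime
    if record.get("parents_divorced"):
        points += 1
    if record.get("parent_has_police_record"):
        points += 2
    return points

# A child who was harmed by others ends up flagged for extra police attention.
print(student_risk_points({"experienced_abuse": True, "crime_victim": True}))  # prints 5
```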
The algorithm generated a list of over 18,000 students, some of whom were then singled out for “prolific offender checks” at home. Law enforcement would repeatedly show up in the middle of the night, issuing citations for things like grass that was too long or unvaccinated pets. Okoh, a member of a community coalition working to end the predictive program, notes that documents revealed in a lawsuit showed one former deputy saying the explicit goal was to “make their lives miserable until they move or sue.”
Much of this industry is driven by federal funding. Pasco’s predictive program, for example, was funded through a federal grant from the Stop School Violence Act of 2018. In New Jersey, Newark Public Schools plan to install seven thousand AI-equipped cameras before the end of this school year, thanks to federal pandemic-relief money. The fiber-optics company working with the school district explicitly advertises its “proactive approach to security” to school districts with American Rescue Plan funding.
The new setup will network with smart sensors the district already uses, devices often set up in bathrooms that monitor air quality to prevent activities like vaping. Less obvious are the algorithms using data from microphones installed in New Jersey schools, designed to track students’ conversations and identify aggressive behavior, which then triggers automatic alerts to law enforcement.
This kind of “affect” recognition is unproven, Okoh notes, comparing it to the racist and scientifically invalidated theory of phrenology. In numerous instances, AI predictions have sent police to students’ homes, in what Okoh calls a “negative feedback loop.”
In 2022, senators Elizabeth Warren (D-MA) and Ed Markey (D-MA) released a report finding that companies deploying these student tracking systems have not taken any steps to determine if education technology disproportionately targets students from marginalized groups.
“It’s dystopian,” Okoh says, particularly during a time when classroom subjects have become divisive and politicized.
It’s not hard to imagine the possibilities for censorship or escalation. The Department of Homeland Security, for instance, is opening an AI Office and has allocated millions of dollars from its 2025 budget to the technology. The agency already uses algorithms to generate scores for immigrants that help decide whether people will be released or surveilled. “Once you build this infrastructure, the possibilities are limitless,” Okoh says.
“An Instrument of Righteous Fury”
Before artificial intelligence was regularly monitoring students’ reading material, back when De Liban was in fourth grade, he picked up The Autobiography of Malcolm X. His grandparents immigrated from Europe to the United States after World War II, and they passed down a certain sympathy for the underdog. They cared, he says, that “people have a fair shake.”
But when he graduated college and started applying to law school, De Liban found he couldn’t write an honest essay about why he wanted to be a lawyer. Instead, after his father died unexpectedly, he moved to Bellingham, Washington, where he worked as an outreach worker to people experiencing homelessness. Much of his job was trying to help people access social services, work that eventually led him to become a legal aid attorney.
The cases he’s seen since provide an inexorable record of how algorithmic decisions harm people, from childhood to retirement. The Social Security Administration, for example, provides supplemental income for disabled people who don’t qualify for other kinds of benefits. There are complex qualification rules, and the agency has begun using an algorithm to review property and bank records to ensure people aren’t above strict asset limits. De Liban’s clients found that these computer systems were accusing people of owning property actually owned by someone with a similar name — cutting off their benefits immediately.
Addressing the problem is “difficult because people have to prove they don’t own something the computer says they own,” he explains. One case took weeks to resolve, all while his client was threatened with eviction and didn’t have enough to eat. Though De Liban eventually won that client’s benefits back, Social Security did not fix its systems.
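A minimal, hypothetical sketch of that failure mode (the names, threshold, and matching rule below are invented, not the Social Security Administration’s actual system) shows how a loose name match against a property database can pin someone else’s asset on a benefits recipient:

```python
# Hypothetical illustration only: invented names, threshold, and matching rule.
from difflib import SequenceMatcher

property_records = [
    {"owner": "Mary A. Johnson", "parcel_value": 180_000},  # a different person's house
]

def flagged_over_asset_limit(applicant_name: str, asset_limit: int = 2_000) -> bool:
    """Flag an applicant if a loosely matching owner name holds assets over the limit."""
    for record in property_records:
        similarity = SequenceMatcher(None, applicant_name.lower(),
                                     record["owner"].lower()).ratio()
        if similarity > 0.8 and record["parcel_value"] > asset_limit:
            return True  # benefits cut off on the strength of someone else's property
    return False

# "Mary Johnson" is flagged as owning a parcel that belongs to "Mary A. Johnson."
print(flagged_over_asset_limit("Mary Johnson"))  # prints True
```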
A 2021 report by a nonprofit focused on senior poverty, Justice in Aging, and the consumer issues organization National Consumer Law Center found that after the Social Security Administration started using a private database designed by data analytics company LexisNexis, this confusion became a common problem nationwide. According to the report, the company appears to be trying to avoid the Fair Credit Reporting Act — a federal law that gives consumers the right to dispute inaccurate information — by adding a disclaimer to its website saying it’s not a “consumer report.”
But according to an official at the Consumer Financial Protection Bureau, companies like LexisNexis are required to explain how such decisions are made, including why an algorithm made a particular decision. “There’s no exception,” the official, who asked not to be named per agency policy, said on November 6. “The [Fair Credit Reporting Act] law applies regardless of technology that firms use.”
Under Joe Biden, the consumer protection agency had hired experts with backgrounds in AI and data science onto its law enforcement team. Separately, the Department of Justice also issued guidance warning that existing discrimination laws apply to the use of AI in tenant screening and housing advertisements, hiring practices, and patient care decisions.
But now, those efforts might be moot.
On February 1, Trump fired the head of the Consumer Financial Protection Bureau, and officials quickly ordered the agency’s staff to stop working. For the agency’s replacement chief, Trump nominated Jonathan McKernan, a former regulator who green-lit several bank megamergers. Meanwhile, Sen. Tim Scott (R-SC), chair of the Senate committee in charge of the Consumer Financial Protection Bureau, is backed by industries like cryptocurrency and financial technology that are doubling down on AI.
These policy shifts are why De Liban says building the partnerships and tools to fight back is more crucial now than ever — even when it comes at a personal cost.
Though he doesn’t like talking about himself, for years, De Liban’s advocacy kept him from having much of a life. “There just was not much leftover energy or emotion,” he says. But in 2017, as he was helping people with their Medicaid cuts, he met a woman working on similar issues for the National Health Law Program.
They got married during the pandemic, and now have a little boy. Parenthood has made his work feel heavier, more immediate. “What I do affects, at least in a small way, what kind of world he’s going to grow up in,” he says. It’s hard to balance being the kind of dad and partner he wants to be with the relentless pace of working to create a better world for his son to grow into.
He’s found release from these tensions in writing his own songs, seeing both his original hip-hop and his legal work as grounded in a shared message. “I’m an instrument of righteous fury, wrestle shadows, fight the worry. See true so clear it’s slightly blurry,” he raps. “I’ll keep pushing till it breaks me first.”
The act of imagination needed to create — a child, or a vision for a world that treats people fairly — has transformed how he thinks about justice into something that cuts a little closer to the bone.
His clients bring the raw materials, he says. “They bring their courage: they’re saying this isn’t fair, I want it to be different.” Often they’re ignored.
Joining forces to fight and pushing people to pay attention sometimes helps make things right. But even when they lose, De Liban says one element of justice is feeling “like somebody, for the first time, has heard you.” It’s something an algorithm can never do.