AI Is Being Used to Attack the Working Poor
In Australia, automated decision-making technologies have extorted money from half a million welfare recipients. Despite government recriminations, the use of artificial intelligence to harass workers is only gaining ground.
Australian business and government have joined the global chorus warning about the risks artificial intelligence (AI) poses to humanity. But despite their fretful tone, the introduction of algorithms into Australian political life has been less apocalyptic and more business-as-usual. Changes to the Australian welfare system are a prime example. Punitive and difficult-to-access by design, the system is now best known for a disastrous algorithmic innovation nicknamed robodebt.
Robodebt was designed by the Liberal Party and high-level Australian public servants to ostensibly catch “welfare cheats.” It ran from 2016 until it was declared illegal in 2019. A royal commission into the now-infamous scheme handed down its findings last week. They were, unsurprisingly, scathing. The scheme relied on a rigid and flawed algorithm that erroneously issued threatening debt repayment notices to around half a million Australians. Over its short but catastrophic life span, robodebt resulted in deaths, suicides, impoverishment, housing stress, and mass trauma.
The fallout from the scheme and the commission’s predictable findings is ongoing. But the whole sordid saga has raised many questions. These relate not only to the corrupt and cruel behavior of politicians and the public service, but also to the uses and abuses of algorithms and AI in Australians’ working lives.
The Labor government, relishing the ignominy heaped upon its predecessor, has issued a slew of reports and discussion papers on the “safe and responsible” use of AI in Australia. But these reports, quite naturally, avoid discussion of what technological advances in capitalism are generally designed to do: prolong the working day, intensify the labor process for workers, and maximize profits for big business.
“An Ice Cube’s Chance in Hell…”
When it was launched in 2016, the robodebt scheme came under instant fire for its flawed premise and formulation. The algorithm compared income data from tax returns, which is reported annually, against the income that recipients declared fortnightly to the social security agency. It smeared the annual figure evenly across the year, and if that averaged income did not match a recipient’s fortnightly declarations, the algorithm automatically declared the discrepancy a debt and issued a menacing repayment order. Then–human services minister Alan Tudge appeared on television to echo the threat, warning “we’ll find you, we’ll track you down and you will have to repay those debts and you may end up in prison.”
Before the scheme had even launched, lawyers at the Department of Social Services warned that all this was probably illegal. Repeated external legal advice over the next few years confirmed this assessment. But public servants admitted in the royal commission that these concerns were discarded because the scheme had political backing from then prime minister Scott Morrison. At the time, Morrison had tied his political fortunes to the demonization of welfare recipients and refugees. He argued that “just like they won’t cop people coming on boats, they are not going to cop people who are going to rort that system. So there does need to be a strong welfare cop on the beat.”
As critics instantly pointed out, robodebt made absolutely no sense: social security legislation had been repeatedly amended to force welfare recipients into casual and part-time work. Most didn’t earn a regular salary each fortnight, but the unreliable, piecemeal wages that the legislation cajoled them to earn.
In the nightmare scenario that ensued, the algorithm’s assessments were not just incorrect, they were more or less incontestable. Once the algorithm had declared you guilty, the only way to prove your innocence was to produce pay slips for the period in question. But if you weren’t actually working at that time — a reasonable prospect for someone who has turned to the social safety net — this was literally impossible.
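The averaging flaw can be sketched in a few lines of Python. This is purely illustrative, with hypothetical figures: the actual robodebt system’s code was never made public, and its debt calculations against benefit rates were more involved. The point is only to show how smearing an annual total across the year manufactures a phantom debt for a truthful casual worker.

```python
# Illustrative sketch of the income-averaging flaw (hypothetical figures,
# not the actual robodebt implementation, which was never published).

FORTNIGHTS = 26

# A truthful casual worker, paid only in some fortnights of the year.
declared = [1200, 0, 0, 800, 0, 1500] + [0] * 20  # fortnightly wages reported to the welfare agency
annual_income = sum(declared)                      # what the tax office sees: a single yearly total

# The scheme's flawed step: spread the annual total evenly across the year.
averaged = annual_income / FORTNIGHTS

# Every fortnight where the average exceeds the declared wage is treated as
# under-reporting, even though every declaration above was accurate.
phantom_overpayment = sum(max(averaged - actual, 0) for actual in declared)

print(f"Averaged fortnightly income: ${averaged:.2f}")
print(f"Phantom 'overpayment' raised: ${phantom_overpayment:.2f}")
```

Because the worker earned nothing in twenty-three of twenty-six fortnights, the averaged figure exceeds their declared income in all of those periods, and the algorithm tallies the gaps into a debt that never existed. The less regular the income, the larger the phantom debt.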
Robodebt did not technically involve AI, but a clunky algorithm known as an automated decision-making (ADM) system. This was not for lack of trying. After the scheme launched to howls of derision, the Liberal government desperately tried to get the government agency responsible for scientific research to invent an AI up to the task of hunting down the working poor. They were told it was not possible.
If the robodebt algorithm played the role of judge and jury, private debt collection companies kindly stepped in as executioner. Some were owned by huge venture capital firms with ties to the Liberal Party. They were paid commissions — essentially bounties — to extort as much money as they could as quickly as possible from the algorithm’s innocent victims. Corrupt megaconsultancy PricewaterhouseCoopers (PwC) also put its snout in the trough. Wrongly confident that robodebt would keep generating profit for it for years, PwC pocketed $1 million to produce a report on the scheme’s flaws. The report never materialized — PwC was asked to disappear it with “a nod and a wink.”
Overall, robodebt aimed to “recoup” a largely imaginary AUD$2 billion. It cost $606 million to administer up until 2019 and managed to extort $785 million from innocent people. Once the scheme’s illegality was officially declared, this was all then “paid back” under a $1.8 billion settlement. Only a small fraction of this, however, went to the actual victims.
So, apart from costing lives and livelihoods, robodebt cost the Australian taxpayer dearly. It’s worth noting that the paltry amount the scheme initially promised to recoup from the working poor pales in comparison to the tens of billions of dollars Australia loses every year through pro-rich scams like tax havens and negative gearing.
Between Equal Rights, Hal Decides
The robodebt scheme highlights the insidious way new technology has been harnessed to punish welfare recipients. But welfare recipients aren’t the only section of society to come under its sway.
A report from global law firm Herbert Smith Freehills on the monitoring of employees found that over 90 percent of Australian bosses are using digital tools to police worker productivity. As one of the firm’s Australian partners winced,
dystopian is not the right word but there is this sort of omnipresence that people’s movements are being monitored, in a way that I think as Australians we haven’t previously been used to.
This looks different in different contexts. In the insecure gig economy, arbitrary algorithmic decisions enforce 50 percent pay cuts. In the low-paid and highly monopolistic warehousing industry, Australian workers wear algorithm-powered headsets that enforce backbreaking pick rates. Workers at Coles and Amazon report that these algorithms function as unsympathetic overseers policing them down to the second. Even white-collar employees at companies like PwC are being harassed by AI to account for toilet breaks.
As a recent report from the Australia Institute showed, there are very few areas of the economy that are not being affected by the intrusion of algorithms and AI. Companies try to defend these tools as inspired by health, security, or environmental concerns. The obvious truth is that whether it’s in gig, manual, or corporate workplaces, they are designed solely to intensify the working day and squeeze every last cent of possible profit out of the workforce.
This increased scrutiny and intensity contrasts markedly with the treatment afforded to Australian bosses. In March, the Parliamentary Joint Committee on Corporations and Financial Services found that the corporate regulator, the Australian Securities and Investments Commission (ASIC), was employing an automated digital tool to sift through criminal complaints about company directors. The tool referred only 3 percent of complaints to a higher, human level, despite the vast majority including allegations of criminal behavior. In marked contrast to robodebt, the ASIC algorithm is effectively programmed to presume innocence on the part of tens of thousands of dodgy bosses.
Do Algorithms Make History?
There has been a lot of scaremongering about AI from big business in recent months. But if bosses are doing so well from technological advances, why all the fussing?
A quick glance at the Labor government’s recent reports on AI reveals something illuminating. Though framed by warnings of AI-driven apocalypse, they largely express concern about Australian companies’ ability to win market share in emerging sectors already dominated by US firms. If the US tech barons’ recent calls for greater regulation are in fact a call for a greater shift of power to themselves, Australian business monopolists’ whining is more about foreign monopolies.
The Labor government has compared robodebt to the punitive poorhouses of the nineteenth century. Australia’s Amazon and supermarket warehouses, with their digital foremen, have been likened to sweatshops of the same era. Despite their futuristic hype, recent technological advances are reproducing — in a new and twisted way — the enduring logic of our wheezing mode of production: monopoly, with fewer and fewer opportunities for expansion, and exploitation, at an ever-increasing intensity.