Big Tech Is Taking Cues From Big Tobacco’s Playbook
Alongside tireless political lobbying, Big Tech has infiltrated the academic institutions studying and often promoting AI — with little regard for the potentially catastrophic downsides.
Artificial intelligence has potentially huge upsides, but according to some scientists, it is also big and scary, a technology that could spiral out of control and possibly end all human life. And so, naturally, the tech companies that stand to make bank off the menace are aping one of the original big and scary industries: Big Tobacco.
That’s the thrust of a recent study flagged for me by Dr. Max Tegmark after our fascinating and terrifying Lever Time discussion about his dire AI warnings, which have been making headlines across the planet.
Tegmark likens the situation to the plot of Don’t Look Up, in which experts tout the benefits of the incoming comet rather than sounding the alarm about its dangers. The 2021 paper he sent me offers some answers about why: it shows how Big Tech has infiltrated the academic institutions studying and often promoting AI — with little regard for the potentially catastrophic downsides.
The researchers at the University of Toronto and Harvard who spearheaded the study offer a conclusion: “Just as Big Tobacco leveraged its funding and initiatives to identify academics who would be receptive to industry positions and who, in turn, could be used to combat legislation and fight litigation, Big Tech leverages its power and structure in the same way.”
Their proof is in the data. Among the tenure-track research faculty at the universities they studied, “58 percent of AI ethics faculty are looking to Big Tech for money,” and when “expanding the funding criteria to include graduate funding as well as previous work experience, we note that 97% of faculty with known funding sources (65% total) have received financial compensation by Big Tech.”
The researchers explain what this means:
Big Tech is able to influence what [faculty] work on. This is because, to bring in research funding, faculty will be pressured to modify their work to be more amenable to the views of Big Tech. This influence can occur even without the explicit intention of manipulation, if those applying for awards and those deciding who deserve funding do not share the same underlying views of what ethics is or how it “should be solved.”
If you want some recent proof of this influence, take a look at this Reuters report showing that Google “moved to tighten control over its scientists’ papers by launching a ‘sensitive topics’ review, and in at least three cases requested authors refrain from casting its technology in a negative light.”
Notably, some of Big Tech’s funding of AI research goes specifically to AI ethics experts often quoted throughout the media. Executives at these companies understand the political power of those experts and their research. Quoting a US Senate aide, journalist Rana Foroohar recounts this in Don’t Be Evil:
“It’s about social and intellectual capture, which is actually much more effective both short- and long-term. Google supports researchers working in areas that are complementary to Google business interests and/or adverse to its competitors’ business interests; things like relaxed copyright laws, patent reform, net neutrality, laissez-faire economics, privacy, robots, AI, media ownership. . . . They do this via direct grants to the researchers, funding of their centers and labs, conferences, contributions to civil society groups, and flying them out to Google events.”
In this way, the company not only builds goodwill but also successfully “grooms academic standard-bearers, prominent academics who will drive younger peers in a direction that is more favorable to the company,” says the aide.
Right now, this is all playing out most prominently in the EU, where tech lobbyists are trying to water down AI regulations amid warnings that the technology could pose an existential threat to all life on the planet. Here in the United States, Politico has been touting an AI lobbying “gold rush” in Washington, as companies and the K Street influence machine have dollar signs in their eyes.
These real-world Peter Isherwells don’t want lawmakers to know — or legislate against — the potential dangers, because that might get in the way of what a new Morgan Stanley memo noted: “Cognitive computing creates potential investment opportunities, as companies develop the technology and use it to transform their business.”
As they seek to protect that investment potential, tech execs and their lobbyists will no doubt wield academic research papers from corporate-linked experts to make their case, and in many instances policymakers may not even know of those links.
That was Big Tobacco’s trick a few decades ago, and it’s Big Tech’s strategy today.