The TikTok Ban Won’t Help

The TikTok ban is about US tech hegemony, not national security or protecting Americans’ data, which homegrown social media companies make a business of collecting and selling.

A Supreme Court ruling has set the stage for TikTok to potentially go offline as soon as Sunday. (Michael M. Santiago / Getty Images)

After today’s Supreme Court ruling, TikTok is set to be banned in the United States on Sunday following the refusal of its Chinese parent company, ByteDance, to sell the social media app to a US company.

In a digital landscape dominated by social media apps owned and curated by US companies, TikTok is the most successful app to have come out of China. It has over 170 million American users (about half of the US population), most of them young people, as well as a significant number of businesses that use the app to advertise their wares.

Whether the ban will actually occur remains anyone’s guess. Joe Biden has said that his administration doesn’t plan to implement it during its last days in the White House, and Donald Trump, who had originally tried to ban the app during his first term, later vowed to save it (after accumulating some fourteen million followers on the platform).

The court, in its decision to uphold the Protecting Americans from Foreign Adversary Controlled Applications Act, signed by President Joe Biden last spring, stated that national security concerns outweigh the potentially harmful consequences for freedom of speech. The justices were sympathetic to the US government’s argument that serious risks arise from the possibility that ByteDance, as a Chinese company, could be required to share data about its American users with the Chinese government. There were also risks, the justices affirmed, that the Chinese government could shape the content circulating on the app, to the detriment of the interests of American citizens.

But the fact that the government has refused to act in a similarly protective manner toward US-owned social media apps is telling. Regulation is desperately needed to protect Americans’ data and to safeguard free speech on social media. If the law were really about data protection or national security, it would set industry-wide standards, but the real motive behind it is to preserve US tech dominance.

A Framework for Global Electronic Dominance

We have all learned a great deal about how social media apps and their algorithms operate over the last few years, particularly since the Cambridge Analytica affair came to light in 2018. Cambridge Analytica was a British political consulting firm that was found to have used data harvested from Facebook to influence voters in the 2016 US presidential election as well as the Brexit vote in Britain that same year. An innocuous questionnaire presented to Facebook users surreptitiously gathered personal data from their profiles as well as the profiles of all their Facebook friends, which Cambridge Analytica then sold to the Trump campaign and to the pro-Brexit campaign without the consent of the Facebook users who had been targeted. Some eighty-seven million Facebook users were affected.

There’s a reason why Meta, Facebook’s parent company, is worth well over a trillion dollars today. Advertising on the platform has become a high-precision operation that can target users with very particular interests at the exact moment they need the products or services being marketed. And as the Cambridge Analytica affair proved, political preferences are not beyond the reach of those algorithms either. The scandal opened the eyes of lawmakers in Congress to just how effective social media apps can be in influencing the behavior of their users. Those complex algorithms can be tweaked in such a way as to promote one point of view or to suppress another without users having the slightest clue that their behavior is being manipulated.

Most of the work in propagating views of one kind or another online is done by the users themselves. As we post content, share other people’s posts, and comment on posts that interest us, we provide social media companies with data that they can use to serve us more content or, in effect, sell to advertisers. By tweaking the algorithms, administrators can control the reach of the desired content to specific groups of users based on their demographic information and online behavior. This is our rudimentary understanding of how things operate in the social media universe. The reality of just how pervasive the control of our behavior by these apps could be is still being uncovered.

It’s already palpable to users of X/Twitter that Elon Musk has been indulging in exactly this kind of algorithmic tweaking to propagate his own political views on that platform since he purchased it a couple of years ago. Musk is now a senior advisor to Donald Trump and is set to play a pivotal role in the incoming administration as colead of DOGE, the Department of Government Efficiency. Thus the lines between private and public control over the content flowing through major social media apps appear blurrier than ever in the United States. The truth is, however, that those lines were never really sharply defined in the first place.

In The Age of Surveillance Capitalism, Shoshana Zuboff describes how Bill Clinton and Al Gore, in their 1997 white paper entitled “A Framework for Global Electronic Commerce,” decided on behalf of all American citizens that democracy would stand down in favor of the private control of information on the internet. Effectively, the keys to the digital information space that was being constructed online were handed over to private corporations, which were then required to make all of that information available to the American surveillance agencies upon request. This gave the NSA, the FBI, and the CIA access to our data whenever they needed it.

Zuboff states that, in 1986, only 1 percent of our vital information was stored digitally. By the year 2000, that share had risen to 25 percent, and by 2013, 100 percent of our most vital information was stored digitally. The intelligence agencies had access to it via the major private corporations, which had been granted permission to gather it and even sell it. Today we live in a world where our mobile phones, our cars, and many of the appliances in our homes and offices gather data about our behavioral patterns, and the innocuous apps that we use store that data and make it available to interested parties for a price. That’s how the telemarketers from the banks know when you may be looking for a new loan, and how the insurance company knows when you may be ready to switch your health insurance to a new provider.

It all began when Google realized it had on its hands what Zuboff calls a “behavioral surplus,” harvested data that goes beyond what is required to improve the quality of the company’s services. Then came Facebook, which began to harvest our data in even more intimate detail. These were the first tech companies to develop “instrumentarian power,” the ability to modify user behavior at scale without any overt coercion, using techniques based on subtle cues like nudges, feedback loops, and recommendation algorithms. As these companies became more and more successful at steering users with ads and targeted content, the global ecosystem of these apps grew ever larger. Today advertisers are more likely to place their ads on Facebook than on television or radio, because the internet is king.

What we have today is an entire economic system built on this instrumentarian power. If capitalism is a system built on the production and sale of commodities, our personal data is now one of the most sought after. It is mined and refined just like oil, and it has become almost as valuable. The ability to influence behavior at such an enormous scale is coveted by all sorts of third parties, particularly e-commerce businesses and political campaigns. So the US Supreme Court may well have reason to fear that TikTok could grant a powerful few undue influence over the behavior of many American citizens, even if politicians’ claims that TikTok, a private company, is funneling user data to the Chinese government are misguided. If the Chinese government wanted the data, it could simply buy it. Rather, the Supreme Court has decided that the free speech of TikTok’s American users is a small price to pay to protect US tech hegemony, not Americans’ data or privacy.

Profits Over People

This is substantiated by the astonishing lack of government oversight of homegrown apps and tech companies. The Supreme Court obviously has few qualms about the undue power to manipulate the behavior of citizens that US policy has granted to corporations, private players who have no concern for the greater interests of their users beyond their ability to target them with ads and political messaging.

A five-thousand-person study published in the American Journal of Epidemiology found that higher social media use correlated with self-reported declines in mental and physical health and in life satisfaction. An internal report from Facebook found that 64 percent of the people who joined extremist groups on the platform did so because its algorithms steered them there. What would it take to limit social media’s antisocial tendencies?

Regulation could compel social media companies to protect our data and our right to privacy, but platforms designed to favor profit maximization over human well-being will always run counter to these goals, whether operated by companies in the United States, China, or elsewhere. The TikTok ban, if it actually happens, shows that government is at least capable of intervening forcefully. But the fact that the ban is motivated by US economic hegemony, and that it comes at a time when tech capitalists and the US government have never been more imbricated, indicates that we can’t expect meaningful, industry-wide intervention on behalf of the many anytime soon.