It seems as though the world may never get to see the promised cage fight between Elon Musk and Mark Zuckerberg, but we can still make sure that both billionaires are big losers by restructuring Section 230. To refresh people’s memories, Section 230 refers to a provision of the 1996 Communications Decency Act, which protects internet hosts from liability for third-party content. This means that Facebook and Twitter/X cannot be sued for carrying defamatory material posted by their users.
This contrasts with the treatment of print or broadcast media. If the New York Times prints a libelous letter or runs a libelous ad, it is not just the person who produced the material who can be sued — the paper is potentially liable. Similarly, if CNN carries a defamatory ad or features people who make outlandish and defamatory statements, it can be sued for defamation. In fact, much of Dominion Voting Systems' successful defamation case against Fox News was based on statements from guests on its shows, not from the company's paid reporters.
The logic of holding a print or broadcast outlet liable for third-party content is that, by disseminating a defamatory claim to a large audience, the outlet amplifies the harm to the person or entity being defamed. A person yelling on a street corner that a particular restaurant sells rotten meat and gave them food poisoning is unlikely to be a big problem for the restaurant. Yet if that assertion is printed in the New York Times or carried on CNN, it can be a serious problem.
These outlets also stand to profit from carrying defamatory material from third parties. This is obviously true in the case of a paid ad, but they stand to profit even when they are not being directly paid. Presumably, they carry the material because they think it will be of interest to their audience. Getting a larger audience means more money for paid ads.
Since the actions of print and broadcast media turn a trivial issue into a real problem for someone being defamed, and they stand to profit from carrying defamatory material, it is reasonable to hold them responsible for the harm caused. This gives them a real incentive to avoid carrying defamatory material and make quick and effective corrections if they do.
However, Section 230 means that internet hosts have no need to worry about carrying defamatory material. If Twitter users want to spread outlandish lies about a person, company, or organization, Twitter has no responsibility to remove the defamatory material. Twitter, Facebook, and any other social media company can even accept paid ads that carry defamatory material.
Whoever took out the ad can be sued, but not the social media company. Adding a twist to this issue, many people post items anonymously. For example, one prominent right-wing presence on Twitter posts under the name Catturd. If Catturd defames someone, the victim may not even be able to find out the identity of the person they want to sue, and Twitter gets to wash its hands completely.
The rationale for exempting internet hosts from liability for defamation is that it would be impossible for them to screen the vast number of posts made by their users. In the case of the largest sites, the number can be hundreds of millions or even billions a day.
However, this fundamentally misrepresents the issue. The law does not have to require internet hosts to screen material in advance. Instead, it can require that they remove defamatory material in response to a notice from the person claiming to have been defamed.
The Digital Millennium Copyright Act Model
This sort of removal requirement would impose costs on internet hosts, but it is a manageable task. We know this because they are already required to take down material involving copyright infringement in response to takedown notices. That obligation comes from the Digital Millennium Copyright Act (DMCA), which adapted copyright rules to the internet.
Notices for defamation claims would work along the same lines as the takedown provision in the DMCA. A person claiming defamation would have to clearly identify the defamatory material and explain what specific content makes it defamatory. The host could then decide to remove the material and avoid the risk of a lawsuit, or decide that the claim is weak and leave it up.
There are many complaints about the DMCA. It does impose a cost on internet hosts. And there is evidence of over-removal, as sites take down material in response to a takedown notice even when the claim of infringement is weak.
There could be similar problems if Section 230 were modified to hold hosts responsible for third-party content when they fail to take down material after being notified. However, there is an important difference between infringement suits and defamation suits.
Copyright law provides for statutory damages, which can be many multiples of actual damages. For example, Spotify pays artists just over 0.4 cents per stream. At that rate, if an internet host allowed ten thousand unauthorized streams, the actual damages would be just over $40.
No one would go through the trouble and expense of filing a lawsuit to collect $40 in damages. However, the law provides for statutory damages, which can run into the thousands of dollars, as well as covering attorney fees.
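The gap between actual and statutory damages can be illustrated with a quick back-of-the-envelope calculation, using the rough per-stream rate cited above and an assumed illustrative statutory floor of $750 per work (the actual figures vary by case and jurisdiction):

```python
# Back-of-the-envelope comparison of actual vs. statutory copyright damages.
# The per-stream royalty is the rough figure cited in the text; the $750
# statutory floor is an assumed illustrative number, not legal advice.

PER_STREAM_ROYALTY = 0.004   # just over 0.4 cents per stream
STREAMS = 10_000             # unauthorized streams in the example
STATUTORY_FLOOR = 750        # assumed illustrative statutory minimum, dollars

actual_damages = PER_STREAM_ROYALTY * STREAMS
print(f"Actual damages: ${actual_damages:.2f}")                    # about $40
print(f"Statutory floor is {STATUTORY_FLOOR / actual_damages:.0f}x larger")
```

The point of the comparison is simply that statutory damages, not actual losses, are what make infringement suits worth filing.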
There are no provisions for statutory damages with defamation suits. This means there is far less incentive to file a lawsuit for defamation than for copyright infringement. It also means that an internet host has far less reason to fear a defamation suit. There may still be issues of over-removal as a host decides to simply remove material rather than determine the truth of a claim, but many frivolous complaints could be easily dismissed.
Exempting Sites That Don’t Have Paid Ads or Sell Personal Information
We can narrow the focus of an amended Section 230 so that it applies to the social media giants but not millions of smaller sites by exempting sites that don’t have paid ads or sell personal information. This would still leave huge sites like Facebook, Twitter, and TikTok on the hook for spreading defamatory material, but the situation for many small sites, and even some large ones, would not change. (Mastodon, a major Twitter competitor, survives on donations.)
Of course, many sites, including some smaller ones, do depend on ad revenue or selling personal information. To take some prominent ones, would a site like Glassdoor be able to operate by subscription? How about Yelp or Airbnb?
There are two possibilities here. The first is subscriptions. I don't know how many people would subscribe to a site like Yelp or Glassdoor, and marketing costs would likely be high relative to potential subscription revenue.
However, it is plausible that aggregators could bundle a set of sites, as cable services do now with television channels. This would not require internet users to take advantage of, or even know about, every site included in a bundle. Presumably, they would choose from aggregators in the same way that they choose now among cable services, selecting ones that include the sites they care about most.
People will dislike paying for something they used to get free, but this has happened with television. Fifty years ago, almost all television was free. At its peak, in 2016, almost one hundred million households were paying for cable services. There is no basis for assuming that people would be unwilling to pay a monthly fee for access to internet sites that they value, especially when many sites are no longer available for free.
I’ve heard some people insist that this proposal would mean that only the rich would have access to the web. Given how much money even very moderate-income people pay for things like cable, internet access, and smartphones, this sort of assertion is beyond silly.
The other route is that sites could assume the liability, but require some sort of waiver from users as a condition of service. For example, Airbnb hosts may be asked to sign a waiver of their right to sue for defamatory postings, subject to some sort of screening procedure by Airbnb. (I am not sure what their current policy is, but I assume they will not allow a racist, sexist, or otherwise offensive comment to stay on their site.)
Some sites may also stop hosting comments to avoid the problem altogether. For example, newspapers may opt not to let readers comment on pieces posted on the web.
There certainly is no guarantee that every site that now survives based on ad revenue or selling personal information would make enough through a subscription service to survive. However, if our criterion for good policy is that it never results in anyone going out of business, we would not be implementing many policies. The question is whether we would be better off in a world where internet platforms have similar liability to print and broadcast outlets for circulating defamatory material over the current one, where they can profit from this material with impunity.
Downsizing the Social Media Giants
In addition to making liability rules for third-party content on the internet similar to the rules for print and broadcast media, restructuring Section 230 in this way could go far toward reducing the importance of the leading social media sites. It is absurd that we have to rely on the whims of billionaires to limit the spread of disinformation and outright lies.
There are plenty of bad actors in print and broadcast media, which is already serious cause for concern. We should be talking about media reform more generally. But even the worst actions of Fox or the New York Post can’t have anywhere near the impact of a decision by Elon Musk or Mark Zuckerberg to tolerate or even promote lies about elections, vaccines, or whatever else.
Many harmful lies may not include any claims that amount to defamation, but they would have much less impact if they were spread on a social media site that is one-tenth the size of the current giants. Modifying Section 230 in a creative way can help to bring about that outcome and make both Elon Musk and Mark Zuckerberg look like they lost a cage fight.