Cory Doctorow Explains Why Big Tech Is Making the Internet Terrible

Cory Doctorow

The internet is increasingly a miserable place to be. As Cory Doctorow explains, Silicon Valley CEOs and grifters are working hard to keep it that way.

Google has never, since the development of its search engine, innovated. (Avishek Das/SOPA Images/LightRocket via Getty Images)

Interview by
David Moscrop

In his new book Red Team Blues, Cory Doctorow explores the world, and grift, of cryptocurrency by way of a techno-thriller. He recently spoke to Jacobin’s David Moscrop about his latest book, the state and future of artificial intelligence, and why Twitter and Google suck so much.


David Moscrop

I want to start with your new book, Red Team Blues. First of all, fantastic title. The catalog copy describes it as “a grabby next-Tuesday thriller about cryptocurrency shenanigans that will awaken you to how the world really works.” You’ve written fiction about proprietary technology, 3D printing, uploading consciousness, surveillance, and lots more. Why cryptocurrency this time?

Cory Doctorow

Well, I think that there’s a mode of what you might call service journalism that can carry over into fiction — and science fiction is really good at it — which takes ideas that are complicated and extremely salient, and that are treated as a black box by readers, and unpacks them, sometimes around a narrative. I think of Margot Robbie doing this in The Big Short when she’s in a bathtub full of bubble bath, explaining how credit default swaps and collateralized debt obligations work.

There’s this kind of performative complexity in a lot of the wickedness in our world — things are made complex so they’ll be hard to understand. The pretense is they’re hard to understand because they’re intrinsically complex. And there’s a term in the finance sector for this, which is “MEGO”: My Eyes Glaze Over. It’s a trick.

You give someone a prospectus so thick that they assume it must be important and good in the same way that when you give someone a pile of shit that’s big enough, they assume there must be a pony under it. And so I wanted to take that complex technical stuff going on with crypto, an arena in which I think there is mostly no “there” there, and do exactly that: show how there’s no “there” there in a story about someone who’s trying to figure out whether there’s a “there” there.

This is not in the catalog copy, but it is super important to me: one of the plot elements in this book, and it’s not a spoiler, it’s in the first chapter, is about how secure enclaves and remote attestation work. And this is super nerdy stuff that I’ve been obsessed with for twenty years. It’s basically a way of designing computers so that I can send you a computational job that your computer then performs and then you send the output back to me and I can know for sure that the job was performed on your computer to my specifications and not tampered with by you.

And there are lots of really interesting applications for this, but it’s also incredibly dystopian, right? Because it means that I can send you a keystroke monitor because I’m your boss and know that you’re not interfering with it. It’s basically a way for the computer to treat its owner or user as an adversary and allow a third party to control them with it. That’s the explicit architectural design. And there are some cryptocurrencies that are based on that. They’re kind of peripheral, but cryptocurrency is the first application for these that isn’t entirely sinister.

It may sound esoteric, but the way your iPhone stops you from installing third-party software is the same model. It’s this kind of remote attestation digital signing, secure enclave business. And as you can tell from just hearing me describe it to you, this is incredibly eye-watering, technical, deep, weird, complicated stuff. I happen to think it’s unbelievably important for the future of technological self-determination. And one of my major projects since 2002 — when the first Microsoft white paper about this came out — has been to try and get people to go like, “Oh wait, this is a big deal.” This is in some ways my latest bite at that apple.
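
To make the mechanism a little more concrete, here is a minimal Python sketch of the trust model being described, not any vendor’s actual attestation protocol: the verifier trusts a signing key that the manufacturer baked into the enclave, so the machine’s owner can neither forge the result nor tamper with it undetected. The names and the toy “job” below are invented for illustration.

```python
# Toy sketch of remote attestation (illustrative only, not a real vendor protocol).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The manufacturer provisions a signing key inside the enclave; the remote
# verifier only ever sees the matching public key.
enclave_key = ed25519.Ed25519PrivateKey.generate()
enclave_pub = enclave_key.public_key()

def run_in_enclave(job: dict) -> tuple[bytes, bytes]:
    """Run the job and return (result, signature over job + result)."""
    result = json.dumps({"output": sum(job["numbers"])}).encode()
    evidence = json.dumps(job, sort_keys=True).encode() + result
    return result, enclave_key.sign(evidence)

def verify_attestation(job: dict, result: bytes, signature: bytes) -> bool:
    """The remote party checks that this exact job produced this exact result."""
    evidence = json.dumps(job, sort_keys=True).encode() + result
    try:
        enclave_pub.verify(signature, evidence)
        return True
    except InvalidSignature:
        return False

job = {"numbers": [1, 2, 3]}
result, sig = run_in_enclave(job)
assert verify_attestation(job, result, sig)              # untampered: accepted
assert not verify_attestation(job, result + b"x", sig)   # owner tampered: rejected
```

The dystopian flip side he describes is exactly this property: the signature proves the machine ran the job as specified, whether or not its owner wanted it to.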

Sleight of Hand and “Enshittification”

David Moscrop

I want to talk a little bit more about the “there” there when it comes to crypto, because I could never figure out what it was. The primary hurdle seems to be getting enough people to believe in it. What’s the play? Is it a “Trust me, we know what we’re doing and this will solve all your problems” play? Is that the play with making it a MEGO?

Cory Doctorow

You know what it is? In many cons, you take something that the mark understands and, from there, you encourage them to infer something logically based on their understanding of the setup. But you do a little sleight of hand in between. The mark assumes that if I’m holding a coin in my hand and then I turn my hand over, I’m not slipping the coin between my fingers so that it’s in my palm when the back of my hand is showing and vice versa. And because the mark knows something about how coins and hands perform, he or she just operates on the assumption of how the world works. And the sleight of hand here is similar to a scam that, when I was writing for Wired in the ’90s, my editor Bob Parks uncovered.

There was a company that made GPS tracking bracelets to help you find your kids when they’re kidnapped. And their pitch was, “Here is a bracelet that gets your kids’ location using the GPS satellites and then it sends it back to you using the GPS satellites.” And if you know anything about GPS satellites, you went, “Hey wait a minute, how does that work?” Because with GPS satellites, you can receive from them with a small, low-powered device, but to send data to a GPS satellite, even if they were configured to receive it, which they’re not, would require a twenty-five-foot dish. And this was before mobile data networks. This was in the early ’90s.

What these guys were doing was taking people’s cursory, rule of thumb–based understanding of how GPS worked and how satellites worked — that’s kind of like, “Well I guess it works like a walkie-talkie, right? Where you can send and receive” — and they were just scamming people.

A lot of the crypto stuff starts with what a sleight-of-hand artist would do. “Alright, we know that cryptography works and can keep secrets and we know that money is just an agreement among people to treat something as valuable. What if we could use that secrecy when processing payments and in so doing prevent governments from interrupting payments?”

After this setup, the con artist can get the mark to pick his or her poison: “It will stop big government from interfering with the free market” or “It will stop US hegemony from interdicting individuals who are hostile to American interests in other countries and allow them to make transactions” or “It will let you send money to dissident whistleblowers who are being blocked by Visa and American Express.” These are all applications that, depending on the mark’s political views, will affirm the rightness of the endeavor. The mark will think, “That is a totally legitimate application.”

It starts with a sleight of hand because all the premises that the mark is agreeing with are actually only sort of right. It’s a first approximation of right and there are a lot of devils in the details. And understanding those details requires a pretty sophisticated technical understanding.

David Moscrop

Well, speaking of grifts, let’s talk about Twitter. The site was never a utopian online space, but it was previously at least better than it is now. What’s driving its collapse beyond Elon Musk purchasing it? Is there something better out there or something better to come?

Cory Doctorow

I think we should thank Elon Musk for what he’s doing, because a lot of the decay of platforms, and the abuses that enable that decay, are undertaken slowly and with the finest of lines, so that it’s very hard to point at them and say they’re happening. And Musk, a bit like Donald Trump, instead of moving slowly with a very fine-tip pencil, grabs a crayon in his fist and just scrawls. This can help to bring attention to issues on which it would otherwise be difficult to reach a consensus.

With Musk and with Trump, it’s much easier to identify the pathology at play and do something about it — and actually get people to understand what the struggle’s contours are and to join the struggle. I think in a very weird way, we should be thankful to Musk and Trump for this.

The pathology that I think Musk is enacting at high speed is something I call “enshittification.” Enshittification is a specific form of monopoly decay that is endemic to digital platforms. And the platform is the canonical form of the digital firm. It’s like a pure rentier intermediary business where the firm has a set of users or buyers and it has a set of business customers or sellers, and it intermediates between them. And it does so in a low competition environment where antitrust law or competition laws are not vigorously enforced.

To the extent that it has access to things like capital, it can leverage its resources to buy potential competitors or use predatory pricing to remove potential competitors from the market. Think about Uber losing forty cents on the dollar for thirteen years to just eliminate yellow cabs and starve public transit investment by making it seem like there’s a viable alternative in rideshare vehicles. And we see predatory pricing and predatory acquisition in many, many, many domains.

Just look at grocery stores in Canada. Loblaws is buying its competitors, engaging in predatory pricing, and abusing both its suppliers and customers to extract monopoly rents and leave everyone worse off. But there’s a thing that happens in the digital world that’s different. Digital platforms have a high-speed flexibility that is not really present in analog businesses.

John D. Rockefeller was doing all this stuff one hundred twenty years ago, but if Rockefeller was like, “I secretly own this train line and I use the fact that it’s the only way to get oil to market to exclude my rivals, and I’m worried that there’s a ferry line coming that will offer an alternate route that will be more efficient,” he can’t just click a mouse and build another train line that offers the service more cheaply until the ferry line goes out of business and then abandon the train line. The non-digital example is capital intensive, and it demands incredibly slow processes. With digital, you can do a thing that I call “twiddling,” which is just changing the business logic really quickly.

Jeff Bezos is a grocer twice over. He runs a company called Amazon Fresh that’s an all-digital grocer and he runs a company called Whole Foods that’s an analog grocer. And if Amazon Fresh wants to gouge on the price of eggs, he just clicks a mouse and the price of eggs changes on the platform; he can even change the price for different customers or at different times of the day. If Whole Foods wants to change the price of eggs, they need teenagers on roller skates with pricing guns. And so, the ability to play the shell game really quickly is curtailed in the analog world.
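
As a concrete illustration of that “twiddling,” here is a hypothetical pricing function of the sort a purely digital storefront can redeploy with a click; the customer segments, hours, and multipliers below are invented for illustration, not Amazon’s actual logic.

```python
# "Twiddling" in miniature: on a digital storefront, price is just a function
# that can vary by customer segment and hour and be redeployed instantly.
# All numbers are hypothetical.
from datetime import datetime

BASE_PRICE = 4.00  # dollars per dozen eggs

def egg_price(customer_segment: str, now: datetime) -> float:
    price = BASE_PRICE
    if customer_segment == "price_insensitive":
        price *= 1.25   # charge more to shoppers the data says won't notice
    if 17 <= now.hour <= 20:
        price *= 1.10   # dinner-rush markup
    return round(price, 2)

# Two customers, same eggs, same evening, different prices.
print(egg_price("price_insensitive", datetime(2023, 5, 1, 18, 0)))  # 5.5
print(egg_price("bargain_hunter", datetime(2023, 5, 1, 18, 0)))     # 4.4
```

The point is not the specific numbers but that the analog grocer would need weeks of relabeling to do what this function does between one page load and the next.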

The digital world does the same things that mediocre sociopath monopolists did in the Gilded Age, but they do it faster and with computers. And in some ways, this contributes to the kind of mythology surrounding the digital world’s Gilded Age equivalents. They can pass themselves off as super geniuses because they’re just doing something fast and with computers that makes it look like an amazing magic trick, even though it’s just the same thing, but fast. And the way that this cycle unfolds is you use this twiddling to allocate surpluses — that is, to give goodies to end users so they come onto the platform. This means things like loss leaders and subsidized shipping.

In the case of Facebook or Twitter, it’s “you tell us who you want to hear from and we’ll tell you when they say something new.” That’s a valuable proposition; that’s a cool and interesting technology. And then you want to bring business customers onto the platform. And so, you’ve got to withdraw some surplus from the end users. So, you start spying on end users and using that to make algorithmic recommendations.

Privacy is a classic consumer surplus. It’s a thing that you have that someone else can expropriate from you and use to benefit at your expense. Platforms like Twitter start spying on you and targeting ads to you. And they can allocate surplus to advertisers by saying, “hey, just like we have been historically reliably delivering updates from people that our users follow, we can reliably deliver updates to those users from you based on your targeting criteria.”

They can also bring in publishers: “Hey, we will be the funnel for your website. You just post excerpts or links to your content. We will use our surveillance to displace the things our users have asked us to show” — the updates from their followers — “and replace them with not just ads, but links to your website. And some of those things might be things that users want to see, so they might subscribe to you.”  You, the end user, are now on the receiving end of this funnel.

In the next stage of monopoly decay, the surpluses are withdrawn from users and from business customers to the point where there’s only enough surplus left in the platform to keep them locked in, but not so much that anything is left on the table that could otherwise go to the shareholders. And that’s what we’re seeing Twitter do. How many ads can they cram into your feed? How little can they show you of what you’ve asked to see? How few of the users who have subscribed to your feed can be shown your updates, so that you can be charged to boost them or pay for Twitter Blue as a way of reaching your own followers? And that’s just one way to rug pull.

Musk did this open-sourcing of the algorithm, and a lot of the rules got a lot of attention. I think the one that was most important didn’t get enough attention at all, which is that links off-platform get down-ranked. So, if you’re Jacobin and you’ve got a post in your feed that reads, “Here’s our new article and here’s a link to it,” that has a lower chance of showing up in your followers’ feeds than if you instead recapitulate that article without a link back to it. This means that your opportunity to derive revenue from your content is firmly locked to the platform.
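
Here is a toy sketch of what a down-ranking rule like that amounts to. It is not Twitter’s published ranking code, just an illustration of how a single scoring penalty on off-platform links is enough to push a linked article below a pasted-in copy of the same article; the posts, weights, and numbers are made up.

```python
# Hypothetical feed-ranking sketch (not Twitter's actual algorithm).
import re

EXTERNAL_LINK = re.compile(r"https?://(?!twitter\.com)\S+")

def score(post: dict) -> float:
    """Toy score: engagement, with a penalty for posts that link off-platform."""
    base = post["likes"] + 2 * post["reposts"]
    if EXTERNAL_LINK.search(post["text"]):
        base *= 0.5   # down-rank posts that send readers away from the platform
    return base

posts = [
    {"text": "New article: https://jacobin.com/2023/04/article", "likes": 100, "reposts": 40},
    {"text": "Thread: pasting the whole article here instead of linking...", "likes": 100, "reposts": 40},
]
ranked = sorted(posts, key=score, reverse=True)
print(ranked[0]["text"])  # the link-free thread wins, despite identical engagement
```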

One of the things that platforms do when they reach this stage is start undermining both the revenue that publishers get from advertising — paying out a smaller share of the money they collect from advertisers for the ads shown alongside publishers’ material — and the value that advertisers get, by charging them more and delivering their ads less reliably.

Facebook and Google had an illegal collusive arrangement, which they called Jedi Blue, to raise the price of ads while lowering the delivery of ads and the share paid out to publishers. This came out in the Texas attorney general’s lawsuit. That is classic enshittification.

And that’s just what we can see Musk doing overtly. I’ll bet you anything that he’s doing some of this covertly — that he’s doing things like charging for ads that never get displayed; that he is charging more for ads than would be paid on an honest basis; or not delivering things that he’s promising advertisers and charging them for anyway. Not because he’s especially wicked, but because that’s what Facebook and Google already do. Musk’s problem, I think, is that he’s especially careless. It won’t require the Texas attorney general to depose Twitter’s executives and do discovery on their internal memos. I think Musk will fat-finger it and post it at two in the morning when he is coming down off fucking DMT or something. That’s how we’ll find out about it with Musk.

And so that’s where we see Twitter going.

The second part of your question was, “Is there something better coming?” I think it’s Mastodon, and I think it’s Mastodon for a bunch of reasons. One is that the Mastodon standard was developed when the tech platforms were totally uninterested and didn’t have their fingers on the scale. ActivityPub, which is the standard that governs Mastodon, happened in this moment of reduced scrutiny and interference. The people who made it were ideologically committed to decentralization and technological self-determination, and they made it without interference from large firms that otherwise would’ve found it relatively easy to capture the process.

So, it’s a very good standard and it has a lot of very good characteristics. One of them is that it has the right to exit built in. The standard ensures any user can export not just the list of people they follow, but the list of people who follow them. Users can automatically update all of those people when they quit one server, or instance, and move to another.
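
In practice, that “right to exit” looks something like the sketch below: pulling your own follow graph out over Mastodon’s REST API. The server name and token are placeholders, and endpoint paths and pagination details can vary between Mastodon versions, so treat this as an outline rather than a finished client.

```python
# Rough sketch of exporting your follow graph from a Mastodon server.
# Instance URL and token are placeholders; check your server's API docs.
import requests

INSTANCE = "https://example.social"
TOKEN = "YOUR_ACCESS_TOKEN"            # an application token created in your settings
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def fetch_all(url: str) -> list[dict]:
    """Follow Link-header pagination until the list is exhausted."""
    accounts = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        accounts.extend(resp.json())
        url = resp.links.get("next", {}).get("url")  # None when there are no more pages
    return accounts

me = requests.get(f"{INSTANCE}/api/v1/accounts/verify_credentials",
                  headers=HEADERS, timeout=30).json()
following = fetch_all(f"{INSTANCE}/api/v1/accounts/{me['id']}/following")
followers = fetch_all(f"{INSTANCE}/api/v1/accounts/{me['id']}/followers")
print(f"exported {len(following)} followed accounts and {len(followers)} followers")
```

The design choice is the interesting part: because the protocol treats the follow graph as the user’s, the administrator has nothing to hold hostage.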

One of the reasons that people still use Twitter, even though its quality has been manifestly degraded, is because they like the people who follow them and who they follow. It’s of value to them; that plays into Musk’s calculus. He’s trying to find an equilibrium where he’s enshittifying the platform to the point where it is almost useless to you but not totally useless. And one of the things that affects that calculus is what self-help measures you can take. If you can’t leave Twitter, then Musk can do bad things to you and assume that you won’t go, because the value of your followers is more than the pain that Musk is inflicting on you. If it’s easier for you to take your followers with you, then the pain he can inflict on you is smaller. And that’s also true of the administrators of these small servers.

The other thing that Mastodon embodies is something called end-to-end, which is that the default posture for Mastodon is connecting willing senders with willing receivers. If the two try to communicate, in other words — if I post something for my followers and you follow me — Mastodon delivers it. The default of Mastodon is that things that are sent by senders are received by receivers, provided that both parties consent to that process. What that means is that the process by which you extort money from media brands — by telling them that the people who follow them won’t see their updates — is just off the table. And it means that a lot of the privacy-invasive conduct and advertising conduct that contributes to enshittification just isn’t workable in the way that it is on Twitter.
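
Stated as code, the end-to-end rule he’s describing is almost trivially simple, which is part of his point about enforceability. This is an illustrative toy, not Mastodon’s implementation.

```python
# Toy statement of the end-to-end principle: a willing sender plus a willing
# receiver (the receiver consented by following) means the post is delivered.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    follows: set[str] = field(default_factory=set)
    inbox: list[str] = field(default_factory=list)

def deliver(post: str, sender: User, everyone: list[User]) -> None:
    for user in everyone:
        if sender.name in user.follows:   # consent: the receiver chose to follow
            user.inbox.append(post)       # ...so delivery is unconditional, no paid "boost"

alice = User("alice")
bob = User("bob", follows={"alice"})
deliver("new article up!", alice, [alice, bob])
assert bob.inbox == ["new article up!"]   # nothing withheld to sell reach back to alice
```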

Finally, crafting and enforcing a regulation that keeps things this way is pretty straightforward. A lot of people have said, “Oh, well, there should be a regulation that prevents Twitter from allowing harassment on the platform.” But that would require us to make factual determinations about whether conduct rises to the level of harassment. If you make fun of Elon Musk, is that harassment? If so, it would require that we have some kind of finding of fact to determine whether they could have prevented it or could have done something more; whether they acted in good faith. And all of that might take five years to determine. Meanwhile, you’re still being harassed. Whereas saying you just have to let people leave, and you also have to deliver messages from willing senders to willing receivers, is really easy to administer.

If you quit, say, Mastodon.lol and you complain that the administrator never gave you your data, you go to the privacy commissioner who handles things like access to user data and you say, “Hey, I want to lodge a complaint because the administrator of Mastodon.lol never gave me my data.” They don’t have to do any finding of fact. They just send a letter to that administrator saying, “Just send him another copy of the data. I know you say you sent it, and he says he didn’t get it. Just send another copy.” And how do we know if end-to-end is being enforced? I send a message and you tell me whether you received it. And, so, we don’t need to depose Facebook engineers to do end-to-end. We can test it ourselves.

Capitalists for Cooperation

David Moscrop

Speaking of decline, in February you posted a thread on Twitter about the future of internet search engines, and you wrote that Microsoft and Google agree that our online explorations will increasingly be driven toward “lengthy florid paragraphs written by a #chatbot who happens to be a habitual liar.”

Cory Doctorow

I should have said habitual confident liar. But yeah.

David Moscrop

What does that look like in practice? If someone sits down in the future and is going to search on Bing, for reasons passing understanding, what are they going to encounter and why?

Cory Doctorow

I think that we need to understand that capitalists hate capitalism. They don’t want to be in an environment in which they have to compete. And there are a couple of reasons for that. One is just that if there’s no alternative, they can extract more surplus from you without you defecting to a rival’s offer. And so, they really like lock-in and predatory pricing and mergers-to-monopoly. I mean, the term “trust busting” comes from the trusts.

Rockefeller and other barons went to all the competitors who nominally hated each other and said, “Hey, we’re going to start a new company, a holding company, a trust. You guys all sell your companies to the trust. We’ll give you shares in the trust equal to the valuation of your company relative to the overall pie. And then I’ll run the trust.” The companies would still exist, but just as branch offices of a single trust. That is not the action of a capitalist who believes in going for the brass ring and world domination. They don’t like capitalism because they don’t want to waste their time competing. As Peter Thiel says, competition is for losers.

They also don’t like it because when companies are merged into a cartel, a monopoly, or an oligopoly, it’s really easy for them to decide what their common lobbying position will be. If there are a hundred companies in a sector, they won’t even be able to agree on how to cater their annual meeting. But if there are three of them, they’ll all show up before the FCC [Federal Communications Commission] or the CRTC [Canadian Radio-television and Telecommunications Commission] or another expert regulator and say, “Hey, net neutrality doesn’t work” or “Opioids are safe.” It’s very easy for them to arrive at these common lobbying positions. And that’s a way for them to conspire to prevent other entities from entering the market. They can shape the regulation.

Google as a company kind of epitomizes all of this. Google is a company that made one successful product. They made a search engine and it was really good. And then they just had no other ideas. Everything they tried in-house was a failure. The exceptions are their Hotmail clone and the time they took the Safari code base that Apple had discarded and used it to make Chrome. Every other product that has succeeded is something they bought from someone else.

Their whole ad tech stack, their whole video stack, their whole server management stack, their whole mobile stack, docs, calendaring, maps, road navigation, these are all acquisitions. So, Google is like, “We are Willy Wonka’s Idea Factory, we’re geniuses who come up with ideas,” but the ideas that they actually come up with — Google+, Sidewalk Labs, the floating Wi-Fi balloons, even their RSS reader — they’ve all failed. Google Video was a failure. They had to buy YouTube to have a successful video offering.

Google has a lot of anxiety because they know in their hearts that it’s all a performance — that what they’ve got is access to the capital markets, not geniuses. There may be geniuses working at Google, but Google’s institutional structure does not allow genius to emerge. The genius that they have is operationalizing other people’s genius, and they’re definitely really good at that.

But that’s just another way of saying they’re a monopoly. Every monopoly has at the minimum some logistical competence, operational competence. Loblaws couldn’t raise the price of bread if it couldn’t get bread onto the shelves of all of its stores all over Canada. Obviously, Loblaws is good at that, and obviously Google is good at scaling up YouTube. They couldn’t be a monopolist if they weren’t. But we don’t make industrial policy just for operational excellence, we make industrial policy for innovation and progress. And Google sucks at that.

Google has never, since the development of its search, innovated; all they’ve done is operationalize. There is all of this hype now about AI, and it’s not the first time that Google has been stampeded into doing something because of hype. In the early 2000s, there was all this hype that the next frontier for American tech companies was China. Yahoo went into China and started ratting out its users to the politburo and having them arrested and tortured because that was what it took to stay in China. And Google, the “don’t be evil” company, was like, “We’re going to do that, too.”

Even then, I think, they were like, “We don’t have any ideas. We can’t do better than Yahoo. We just have to do what Yahoo’s doing better than them. We’re not going to have a better idea than Yahoo has.” So, they went into China. Their users’ email got hacked, and their users got rounded up and tortured. Sergey Brin, from what I’m told, was like, “My family didn’t flee the Soviet Union for me to rat people out to the secret police.” And he just pulled the chute. He was just, like, “We are leaving.” And so they got out.

But now they’re getting stampeded by AI and it’s just the same process. There are all these people out there who are saying AI is the next big thing. You have the analysts calling it a $13 trillion business, by which they mean you can fire $13 trillion worth of workers and replace them with software. And Google, because they don’t know what a good idea looks like, decides whether an idea is good based on whether or not other people think it’s a good idea.

So, “We have to go do AI.” And they know that their search quality has been falling off. It’s been falling off, I think, because the internal incentives for their product managers are to increase revenue. Each product manager, each executive, is like, “My bonus, which is 5x my salary and determines whether or not my kids go to Harvard without accumulating debt, depends on whether or not I can increase the profitability of my business unit by 3 percent. And the way I do that is by enshittifying.”

That 3 percent does not come from making Google bigger, because it has 98 percent of the search market. It comes from shifting surpluses. It comes from making the platform worse for either publishers or advertisers or users or all three. And for each of them, it’s like a little prisoner’s dilemma game. If only one of them was doing this, it would probably be fine. But because everyone is doing this, all of the products at Google march monotonically toward enshittification.

The whole platform quality is declining. Everyone can see it. Android kind of sucks relative to where it was a few years ago. YouTube is worse for viewers and for performers. Search is really fucking bad. And so, you’ve got Google going, “We just have to try something else. And the only something else we know about are things that other people have validated. So, we’re going to go try what other people are doing. And in this case, it’s AI.” As a result, you get to the florid chatbot confident liar, which is not a thing anyone wants, not a thing anyone’s asked for.

Yes, AI — which, let’s just say here, is not artificial and not intelligent — makes for a lot of great and fun party tricks and probably will make some interesting art and may automate certain parts of certain jobs in ways that make them less shitty to do. But AI is not AI. We haven’t created robots that can answer our questions. As the eminent computer scientist they fired for coming up with this said, “We’ve created stochastic parrots.” All it amounts to is a party trick. And I like party tricks. I was at the Magic Castle last week and I saw a conjuror do an amazing mentalist and sleight of hand act that I’m still thinking about. It’s great. I love living in a world with party tricks, but the idea that the way that we solve searches is with a party trick is just manifestly wrong.

The AI Threat: Skynet or the Limits of the Limited Liability Corporation?

David Moscrop

Listening to this, I’m thinking to myself that a lot of the heat is focused on AI becoming self-aware or turning on us. We’re worried about Skynet, but there are far more mundane and insidious concerns about AI that we should recognize. Are those bigger, pie-in-the-sky concerns legitimate, or are they distracting us from the more insidious, mundane concerns we face now?

Cory Doctorow

I think it’s something else altogether. I think those concerns are just displaced anxiety about limited liability companies. Where is the artificial life-form that humans created to serve them, but which instead requires the humans to serve it? I think that a lot of Musk’s problems are a result of him thinking that he’s got a steering wheel with which he can drive his companies, but every time he yanks the steering wheel, the company doesn’t turn. They’ve got really bad handling. And the reason they’ve got really bad handling is that the company has its own imperatives. It’s got shareholders, it’s got bondholders, it’s got workers, it’s got regulators, and it’s got customers.

I’ve written that the way to understand IP — a phrase that is otherwise kind of nonsense, because trademark and copyright and patent really aren’t the same thing — is that IP means any law or policy that allows you to reach outside the walls of the firm to control its customers, critics, or competitors. And those customers, critics, and competitors are the things that reduce the handling efficiency of the giant truck that is your company. That’s what they are: the friction coefficients, the ice on the road, the bad weather, and the other drivers that mean you can’t just drive your car wherever you want at the speed that you want to. And so, this is why firms are so obsessed with IP. They want to be able to act as unitary geniuses. They want to be protagonists in Ayn Rand novels and not people who live in a society and have constraints put on them — and not just by a duty to others, but also by rivals.

Again, going back to Thiel’s “competition is for losers.” Why should I waste my time trying to rebut or displace the bad offers of other lesser men who are trying to sell my customers Whoppers when clearly the Big Mac is the best hamburger? Wouldn’t it be better if those people just weren’t there? Then I could concentrate on making the Big Mac better rather than trying to play games with Burger King.

I think that when people worry about Skynet, what they mean is the imperatives of business are driving the world to the brink of human extinction. From the vantage point of the people at the helm, they’re thinking, “It is driving me personally crazy, even though I’m the CEO and I’m nominally the person who’s in charge of it. I am miserable with every hour that God sends. I don’t know how to steer it even though I’m the person in the control room who built the control room and hooked up all the controls and they just don’t work the way that I’m told they do. My God, creation has come to life and it’s got a will of its own.”

That’s Skynet, right? That’s the limited liability company. Charlie Stross calls them slow AIs. They’re basically AIs with clock speeds that are really low, but they still pursue the same imperative: paperclip maximizing.