It seems there’s no end to unflattering headlines for Elon Musk’s car company, Tesla. Back in June, the Washington Post determined that Tesla’s Autopilot, the name the company gives to its artificial intelligence–operated driver assistance program, had been involved in 736 crashes since 2019 — more than four hundred above what was previously known. Nearly two-thirds of those crashes happened over the past year, a rise that has coincided with the rollout of the company’s misleadingly named Full Self-Driving (FSD) software.
That report came four months after regulators forced the company to recall nearly 363,000 FSD-equipped cars and halt any new installations of the software, with the National Highway Traffic Safety Administration (NHTSA) deeming it a “crash risk.” All this comes as reports of Teslas driving into stationary objects or abruptly stopping on the highway have made headlines over the past year.
It all raises the question: If Tesla’s “self-driving” cars are this unsafe, why were they allowed to be bought and driven en masse on public roads without first being properly tested?
The answer: they are being beta tested, on US roads, by the ordinary men and women buying and driving them off the lot. And it’s all because of the markedly lax approach the United States takes to regulation.
Musk himself probably put it best: “In the US, things are legal by default,” he told investors last year. “In Europe, they’re illegal by default. So, we have to get approval beforehand. Whereas, in the US, you can kind of do it on your own cognizance, more or less.”
What’s Musk talking about? In the EU and countries like Australia, where governments tend to place more of a premium on protecting the customer by regulating industry, it has taken years for Tesla’s FSD to gain approval. Even upon finally getting it, the approval has come with strict conditions, including allowing only professional testers to drive FSD-equipped cars on public roads and limiting the number of such cars made available.
Things are very different across the Atlantic. As the NHTSA once explained to a Korean company looking for government permission to sell airbags in the United States, “no such permission is necessary,” since the agency “does not approve motor vehicles or motor vehicle equipment, nor do we . . . conduct pre-sale testing of any commercial products.” Instead, the law allows car companies to “self-certify” that their products are safe, though the NHTSA may at times review a self-certification to make sure it is above board, or step in if an investigation reveals something amiss.
In other words, US vehicle safety runs on something a few degrees removed from the honor system.
This means the NHTSA is more reactive than proactive, most likely to step in once problems start to rear their head — or, in the case of Tesla’s various scandal-plagued products, once dozens of crashes and injuries have caught public attention. This is even though Tesla warned the very week it rolled out the software that FSD “may do the wrong thing at the worst time, so you must always keep your hands on the wheel and pay extra attention to the road. Do not become complacent.”
As the Verge has pointed out, this is in stark contrast to how the federal government treats aviation safety, where, as common sense would dictate, airplane manufacturers have to get approval from the Federal Aviation Administration for their planes and components before pilots start zipping around the skies in them (even as tragically corrupted as this process has been in the recent past).
This has also allowed Tesla to carry out beta testing on the same public roads used by American schoolkids and other pedestrians, not by employing drivers specially trained for the job but by using ordinary car owners, whose FSD software then sends data back to Tesla for further refinement. For a while, Tesla determined which customers could qualify by having the software track their driving performance and issue a “safety score,” which wannabe beta testers had to reach and keep high enough to participate.
But last November, as lawsuits, NHTSA investigations, and even a criminal probe into FSD piled up, the company announced that anyone who owned a Tesla and shelled out the $15,000 for the software could be a tester, regardless of safety score. Sure enough, Tesla’s website no longer mentions the safety score as a condition for beta testing — even as, by one count, FSD is ten times more dangerous than a human driver.
Renegade on the Road
It’s not entirely accurate to say that Tesla is merely taking advantage of lax US regulations. There’s evidence it’s also skirting what regulations exist.
In the state of regulatory anarchy created by the absence of federal laws on autonomous vehicles (AVs), some of the strictest rules are in California, which defines self-driving cars according to the characteristics laid out in Levels 3–5 of the Society of Automotive Engineers (SAE) International’s J3016 standard. Among other things, says University of Miami law professor William Widen, that standard requires that trained safety drivers carry out testing on AVs.
“The companies that are testing Level 3 and Level 4 vehicles have all been doing it mostly on computer simulations and on test tracks, and they have been doing limited testing on public highways,” says Widen. “But even when they’re testing on public highways, they, depending on the state they’re in, they might need permission.”
Tesla gets around all this by claiming its cars are only SAE Level 2, defined as “partial driving automation.” But Widen and Carnegie Mellon University professor Philip Koopman have argued that, even based on Tesla’s own description of FSD, its cars should be classified at a higher level.
A big part of how that classification is decided is the “design intent,” which applies regardless of whether or not the vehicle is actually capable of doing what it’s been designed to do. In other words, the fact that Tesla’s FSD-operated cars at this stage need constant human supervision and have a tendency to plow into stationary objects doesn’t matter as much to regulators as the fact that the cars are meant to one day be fully self-driving.
And when it comes to promoting their product, Tesla and Musk have made it very clear this is exactly what they intend the cars to do.
As early as 2016, Musk announced that all Tesla cars were being equipped with hardware that would eventually let the cars drive by themselves and that it would reach that point through “millions of miles of real-world driving.” Tesla’s lawyers have called it a “long-term, aspirational goal” in a class-action lawsuit charging fraud over Musk’s years-long (and constantly broken) promises that the cars would be self-driving in a year or two. Just last month, he claimed Tesla was “very close to achieving full self-driving without human supervision . . . maybe what you would call [SAE Level] four or five, I think later this year.”
When a CNN reporter tried the FSD software on busy New York City streets, he found himself having to quickly intervene to stop the car from suddenly veering into oncoming traffic or driving into obstacles. But there’s no guarantee that all of Tesla’s increasingly indiscriminately chosen “beta testers” will show the same care and alertness — as reports of Tesla drivers crashing and dying while playing video games or otherwise failing to keep their hands on the wheel show.
It’s not totally surprising that an unscrupulous car company might engage in deceit and even endanger the public to avoid government regulations. The bigger question is why California regulators are allowing Tesla to get away with it. For Widen and Koopman, the fear is that “if this misapplication of law and regulation remains unchallenged, the risk remains that other AV industry participants, not only Tesla, may use this ‘loophole’ to gain some advantage at the expense of safety.”
Driving Into the Abyss
Yet even California’s regulator inaction is leagues ahead of most of the country, with most states having nothing on the books to regulate AVs — or worse.
“Some state laws are not designed to be protective, but to give the company the comfort they can test on public highways in the state,” says Widen. “It’s not really consumer- or public-friendly, but company-friendly.”
Nevada, for instance, has long had a reputation for an AV industry-friendly regulatory landscape, allowing vehicles at all levels of automation to operate on public streets and requiring only that they meet a “minimal risk condition” — meaning that if the car can’t pull off a particular driving task, it must bring itself to a “reasonably safe state,” which might mean stopping. There, robotaxis have been deployed in downtown Las Vegas, and Mercedes-Benz recently became the first company to roll out an admitted Level 3 vehicle onto US streets, one that gives human drivers the option to play video games on the car’s computer screen. (California recently followed suit.)
In Arizona, the governor weakened AV regulations by executive order in 2018 as a sweetener for the AV industry, including scrapping the requirement that the vehicles have a human driver behind the wheel. Two weeks later, a Tempe woman became the first known human killed by the technology, when a self-driving Uber proved unable to recognize jaywalkers while its human driver streamed the TV show The Voice.
There are also questions about whether AVs are being tested in the right places. The NHTSA’s non-comprehensive test-tracking tool shows the majority of on-road testing taking place in California, Arizona, and Michigan, with Florida also well represented. According to the Wall Street Journal, hundreds of self-driving cars are now being deployed in not just California but Miami, Dallas, Austin, and Nashville. But apart from Michigan, these locations may not offer self-driving cars the same kinds of conditions that dominate in other parts of the country, at a time when the vehicles still struggle to operate in heavy rain.
“They don’t work well in the rain, they don’t work well at night, and I don’t know a single one that works in snow,” says Widen. “When the lane lines are covered up, it presents a problem. How that will work in practice I don’t know — it doesn’t rain or snow in Vegas.”
Meanwhile, in some cases, state governments have moved to preempt more stringent local regulations on self-driving cars. The National Conference of State Legislatures (NCSL) lists eight such laws on the books, not including a Pennsylvania law permitting driverless AV testing signed late in former Governor Tom Wolf’s term. Such bills have often been supported and worked on by a ride-share industry keen to save labor costs by cutting the human driver entirely out of the equation.
No Future but What We Make
Traditionally, transport regulation has been split along a federal-state divide, with the NHTSA regulating the “safety design and performance aspects” of vehicles and equipment, and states regulating vehicle operations and the actual drivers themselves. But that may be changing as self-driving cars blur the line between equipment and driver, with Congress and federal agencies showing mounting signs they’re ready to step in.
The question is what form this federal intervention will take. Will the model be states like Nevada, where more permissive regulations exist to appease the industry? Will it be California, where theoretically stricter laws sit idle thanks to unwilling regulators? Or will it be a regime closer to that of Europe, one that carefully permits the development of this exciting but dangerous technology, actually enforces its rules, and ensures the product is truly up to snuff before it’s deployed at a large scale on public roads?
That will be a political battle of its own. But until then, the Elon Musks of the world are more or less free to use as guinea pigs the very US public that pays for the roads and highways they use as an open-air laboratory, with deadly results.