28 Comments
Oct 27, 2023 · edited Oct 27, 2023 · Liked by Gary Marcus

Driverless cars are bad because cars are bad. I'm glad to see their false promises of road safety come to an end so we can stop risking lives in pursuit of products that don't exist, and instead do the things we know make roads safer for everyone: traffic calming, road design for visibility, lower speed limits, walkability, and, most importantly, building mass transit that people want to use.

Yes. Absolutely 100%.

Matthew B. Crawford has an excellent book on why we absolutely don't want driverless cars: "Why We Drive".

I don’t think you are anti-tech. I took a class on AI and neural networks a couple years ago and was shocked by their inherent limitations. Not at all what I expected. Keep up the good work.

With the current tech stack (neural networks, no world model, no reasoning) it is indeed impossible to make driverless vehicles safe enough. However, I am even more optimistic: once the current AI paradigm (neural networks) gets the thumbs-down, a new one will quickly emerge (I am putting my money on variational Bayesian methods), and we might see a real self-driving car within a decade. The problem is that Google is not going to pull the plug so easily, because doing so would be huge reputational damage for them. As long as they keep getting tens of billions in free cash from advertisers all over the world, they can continue the current AI circus indefinitely. It would take a worldwide recession to force their hand; luckily, we might be heading for one.

btw, I just loved the clown icon next to Tesla - it matches their mode of operation perfectly :)

Driverless cars highlight the insane complexity that a 16-year-old human can navigate quite successfully but AI still cannot. One challenge is that these cars are needlessly complex because we are putting all the autonomy in the car itself. If we instead started designing our road infrastructure today to begin the transition, we could have fleets of driverless cars very quickly AND improve the cars driven by humans.

Think about this: if we made cars networkable, with sensors in the roads, these cars could be 'told' the speed limit (instead of having to read signs with complex machine vision). They could be informed of other cars and their behavior (instead of relying on challenging LIDAR interpretation). They could be alerted and queued/sequenced through traffic lights, reducing congestion (Google Maps is already helping here, but there's little else). The list goes on and on (a sketch of what such a message might look like follows at the end of this comment).

All of these road infrastructure improvements would also help human drivers: the car could alert them to erratic drivers, warn of sudden slowdowns in traffic, and more.

Oh, and it improves safety all around!
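
A minimal sketch of what one of those road-to-car messages might look like, assuming a simple JSON broadcast. The schema, field names, and the `make_rsu_broadcast` helper are all invented for illustration; real V2I standards such as DSRC/C-V2X differ.

```python
import json
import time

# Hypothetical vehicle-to-infrastructure (V2I) broadcast: a roadside unit
# periodically announces the local speed limit and signal phase so cars
# need not infer them from cameras alone. Illustrative schema only.
def make_rsu_broadcast(segment_id: str, speed_limit_kmh: int,
                       signal_state: str, seconds_to_change: float) -> str:
    """Serialize one roadside-unit message (invented format, not a standard)."""
    message = {
        "segment_id": segment_id,            # which stretch of road this covers
        "speed_limit_kmh": speed_limit_kmh,  # no sign-reading vision needed
        "signal_state": signal_state,        # "red" | "yellow" | "green"
        "seconds_to_change": seconds_to_change,
        "timestamp": time.time(),            # lets receivers discard stale messages
    }
    return json.dumps(message)

# A receiving car can set its target speed directly from the broadcast:
received = json.loads(make_rsu_broadcast("SF-101-N-42", 80, "green", 12.5))
target_speed_kmh = received["speed_limit_kmh"]
print(f"Target speed: {target_speed_kmh} km/h, light is {received['signal_state']}")
```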

I'd always imagined driverless vehicles having infrastructure built to purpose. The idea of the vehicle having complete autonomy and attempting to mimic human drivers never occurred to me, except perhaps in a few niche science fiction scenarios.

Yet that's how they are being designed.

Yes, thank you. We need to bring road infrastructure into consideration.

I've been using Comma.ai for many years now, upgrading every few years to newer models. They're profitable ($10M+ revenue), run a tight ship, and are very likely to outlast the behemoths burning a billion-plus dollars a year. Great product. While not fully driverless, it does a phenomenal job at highway driving.

Same with Midjourney: <15 engineers, making $100M+ in revenue, and wildly profitable.

AI will deffo help a set of people go from rags to riches.

There is still a good chance OpenAI may burn too much money before making a profitable product.

Too much money is a curse and a blessing at the same time.

What if driverless cars are just a fundamentally bad idea?

We might be able to solve the driverless car problem in good conditions, but what about marginal conditions? Night, rain, snow, ice? Driving in the dark, with rain and oncoming traffic on an unknown, unmarked road?

Seems like the result would be drivers who don't get experience in good conditions, and then are forced to drive in marginal conditions. Seems like a bad idea.

As a side note, 11,668 of the 42,939 US traffic fatalities in 2021 (about 27%) were people who weren't wearing seat belts. This is a behavior issue.

Sure, a driverless car could refuse to go when the occupants didn't put on their belts, but that is a slippery slope. And how long is it before the driverless car won't start in marginal conditions?

Cars themselves kill the planet, while a bunch of people quibble about the most inane things, like control or the lack of it. Preposterous, or business as usual.

Yes, we will eventually have driverless cars, but unless the AI community can make a robot that can walk into an unfamiliar kitchen and fix a meal, full self-driving vehicles will not become a reality anytime soon. True intelligence with common sense will be needed.

I'm not even sure that self-driving cars are a good idea even if we solve the intelligence problem. It will probably be better to train intelligent humanoid robots to drive non-autonomous cars on public roads. There is something to be said for a robot driver that can exit the vehicle to offer assistance to the elderly or disabled, or clear debris off the road after a storm. Also, solving AGI will not eliminate accidents completely; when accidents do happen, having a robot driver that can move around to offer assistance will be beneficial.

> If that’s true, what works in a certain road in San Francisco might not generalize all that well to other places

And conditions on roads can change literally by the minute, so hand-written rules seem even more dangerous than the average ape.

Still, after some research I find there's a lot of misunderstanding about human error while driving, autonomous or not. Most articles point to this study, which doesn't seem very well done to me, and which claims that 99% of autonomous-vehicle incidents are due to human error: https://www.iotworldtoday.com/transportation-logistics/human-error-causes-99-of-autonomous-vehicle-accidents-study.

Meanwhile, popular sources hold that 90% of accidents in human-driven cars are due to human error, but this terrain is quite slippery: https://www.nature.com/articles/s41599-020-00655-z. So what are the real improvements and benchmarks that robotaxis would have to clear?

Just look at Apple: they put USB-C on iPhones. Why not use the same approach for auto navigation systems and make them standard equipment on every car?

I am grateful to have cleared Waymo's waitlist, and I now find I never take Uber/Lyft anymore, as the experience (and safety) is so much better.

It reminds me of the early 2010s, when I would wait 10 minutes for an Uber while watching a series of empty San Francisco cabs go by (SF cabs were, at that time, completely unsupervised and provided, on the whole, terrible service).

As to safety, not only have I witnessed (as a passenger) my Waymo avoid a collision with a car pulling out of a parking spot, one I would not have avoided had I been at the wheel, but if I understand the math of this insurance-industry paper correctly, Waymo has already saved 4.2 human lives vs. the average driver: https://arxiv.org/ftp/arxiv/papers/2309/2309.01206.pdf
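
For intuition about where a "lives saved" number like that comes from: it is essentially expected fatalities at the human-driver rate minus what the fleet actually recorded. A back-of-envelope sketch with placeholder numbers (the mileage and observed count below are assumptions for illustration, not figures from the paper):

```python
# Back-of-envelope "lives saved" estimate. The fleet numbers are
# placeholders for illustration only; see the linked paper for the
# actual mileage and baseline rates it uses.
human_fatality_rate = 1.37e-8   # fatalities per vehicle-mile (~1.37 per 100M miles, US 2021)
fleet_miles = 3.0e8             # hypothetical driverless miles
fleet_fatalities = 0            # hypothetical observed fatalities

expected_if_human_driven = human_fatality_rate * fleet_miles
lives_saved = expected_if_human_driven - fleet_fatalities
print(f"Expected fatalities at the human rate: {expected_if_human_driven:.1f}")
print(f"Estimated lives saved: {lives_saved:.1f}")
```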

I hope the economics work out for Waymo; I do note that Uber sank $32 billion (on a product with marginal barriers to entry) before turning profitable.

I think about driverless cars this way...

1. A driverless car is probably more reliable than a human for 99% of all driving. Robots don't get tired, drink alcohol, take drugs, or get upset after breakups. They should actually be safer for all the routine stuff.

2. But robots aren't perfect. They are only as good as their programming, and the universe is a complex place. There are odd roads, weather events, reflections of the sun off buildings, cars, and other objects: a never-ending stream of curveballs that, by the law of large numbers, will come up at some point. Those are the 1% cases. Or maybe they're only 0.1%, or 0.01%. They are the black swans of the driverless-car world, and precisely because of that they can't be trained for; dealing with them in anything close to a sensible manner requires TRUE intelligence. It's certainly the case that over time we'll shrink these cases down to a smaller and smaller set. The tech will get better. But it won't ever hit zero. The best we can hope for is that the number of deaths in these cases is smaller than it was when humans were running the show (see the sketch after this list).

3. All of the above is a tech argument. I think the biggest issue with self-driving cars is who holds the liability when things go wrong; in other words, the problem is a legal one. Imagine a scenario where a self-driving car gets into one of those 1% cases, makes a bad decision, and kills somebody. Whom does the family of the deceased sue? The car owner? The manufacturer? The programmer? Until we work through the legal questions, the tech questions are almost irrelevant.
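
To put a number on the law-of-large-numbers point in item 2: even a vanishingly small per-mile chance of an unhandled scenario becomes near-certain over fleet-scale mileage. A small sketch with made-up probabilities:

```python
# Probability that a fleet hits at least one "black swan" scenario,
# assuming independent per-mile odds. Both numbers are invented purely
# to illustrate the law-of-large-numbers point.
p_per_mile = 1e-7     # hypothetical chance of an unhandled edge case per mile
fleet_miles = 1e9     # hypothetical annual mileage for a large fleet

p_at_least_one = 1 - (1 - p_per_mile) ** fleet_miles
print(f"P(at least one unhandled case): {p_at_least_one:.6f}")  # ~1.0, near-certain
```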

In my youth I imagined the driverless cars of the future as having infrastructure built for them: wireless communication to guide them through traffic lights and such.

Later, after learning about emergent behavior, I considered the possibility that AVs would have a means to communicate with each other and leverage swarm behaviors. At the very least, that might have prevented an entire fleet of modified Chevrolet Bolts from showing up and stopping in the same place. They might even be able to help each other make an unprotected left turn.

Hi Gary, I wrote about this a while back: https://www.linkedin.com/pulse/fixing-autonomous-car-model-barry-briggs/. TL;DR: the goal of fully "autonomous driving" is unattainable; we are thinking about it wrong. Instead we should think of it like a student driver -- it can handle many, perhaps even most, situations, but occasionally an adult has to step in.

(Author) Latency is a problem if you do it remotely, and if there is a human in the loop, they will get bored if they don't step in frequently. (See the Substack post where I discussed Mackworth and Lex Fridman.)
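
To make the latency point concrete, here is the back-of-envelope distance a car covers before a remote operator's correction can take effect (the speed and latency values are assumptions for illustration):

```python
# Distance a car travels "blind" while waiting on a remote operator.
# Speed and round-trip delay are assumed for illustration.
speed_m_per_s = 30.0          # ~108 km/h, highway speed
round_trip_latency_s = 0.5    # network delay + human reaction, hypothetical

blind_distance_m = speed_m_per_s * round_trip_latency_s
print(f"Car travels {blind_distance_m:.0f} m before remote input applies")  # 15 m
```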

True; honestly, I hadn't thought of the "remote" use case (not even sure why such a scenario would be useful -- remote-controlled taxis? No thanks!). The reality, IMHO, is that for the foreseeable future semiautonomous driving will require someone in the car paying attention -- just like a parent with a teenaged learner.

As Gary notes, attention is a significant issue, especially since your advice to the car may well be time-critical, with seconds to respond.

But even if the human can maintain their attention, where does the human get the experience to advise the car, let alone act continuously as a teacher?

You and I might take this for granted; I have ~500,000 miles behind the wheel. But if you own a driverless car or take Uber all the time, your advice is like a backseat driver's.

(Makes me wonder how many of the people trying to code the driverless car drive at all...)

This is an excellent point, and it suggests that at some point in the future there will be a "tipping point" when software has more driving experience than humans. Still, with the current machine-learning paradigm -- zillions of hours of training against real or simulated situations -- no matter how much you feed it, there will always be scenarios that the inference engine has never seen before. Simply coding it to say "What do I do now, human?" in those situations would be a useful part of the program.
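
A minimal sketch of that "ask the human" fallback, gated on the planner's self-reported confidence. The threshold, function names, and toy planner are all hypothetical, not any vendor's actual logic:

```python
# Hand control back to a human when the planner's confidence drops below
# a threshold -- an illustrative sketch with invented names.
CONFIDENCE_THRESHOLD = 0.95  # hypothetical cutoff

def plan_or_handoff(scene, planner, alert_human):
    """Return the planner's action, or None after asking the human to take over."""
    action, confidence = planner(scene)  # planner returns (action, confidence)
    if confidence < CONFIDENCE_THRESHOLD:
        alert_human("Unfamiliar scenario -- please take over")
        return None  # the human is now driving
    return action

# Toy usage with stand-in callables:
toy_planner = lambda scene: ("steer_left", 0.62)  # low confidence on purpose
action = plan_or_handoff({"objects": ["overturned truck"]}, toy_planner, print)
assert action is None  # the car asked for help instead of guessing
```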

The idea of software accumulating more experience is very attractive. A high-quality driverless program would be very valuable. (Although it might suffer the same fate as the carburetors that got 50 mpg but were suppressed by big oil -- tongue in cheek.)

In 1912, shortly before he died, Wilbur Wright had this to say about solving the problem of human flight:

"This was the fact that those who aspired to solve the problem were constantly pursued by expense, danger and time. In order to succeed it was not only necessary to make progress, but it was necessary to make progress at a sufficient rate to reach the goal before money gave out, or before accident intervened, or before the portion of life allowable for such work was past."

I think this could be applied to the effort to develop the driverless car. Wilbur certainly writes with more clarity than I can about the development cycle.

The "tipping point" might not happen if driverless cars have accidents that sour the public from accepting them.
