36 Comments
Jun 16, 2022 · Liked by Gary Marcus

The peskiness of RL, so different from simulations: curve fitting can never solve it, and even causality won't help. We need a model-building loop, and to demote the DL stuff to one useful signal among others.

Jun 15, 2022 · edited Jun 15, 2022

There's a far bigger question that needs answering, in my opinion: as a society, do we actually NEED FSD technology? Maybe what we really need is a super-efficient public transport infrastructure. That would obviate the need to solve the potentially unsolvable, and defuse the safety rhetoric too, because it would naturally lead to less road use and therefore fewer deaths. For those still concerned about people on the road: the two-lane roundabout has been shown to reduce the biggest category of accidents, those at intersections, by up to 80%. The experiment has already been run. Roundabouts are relatively cheap passive devices that require little maintenance; what's not to like? Oh, yeah, they're not very technologically sexy... FSD is not the solution; what's needed is a societal reframing.

May 2, 2023 · edited May 2, 2023 · Liked by Gary Marcus

"Does that mean that society should give up on building driverless cars? Absolutely not. In the long run, they will save lives, and also give many people, such the blind, elderly, and disabled, have far more autonomy over their own lives."

Speaking as someone with many years in the business of human passenger transport, I can attest that there's much more to the process than simply navigating a vehicle safely, especially when dealing with "the blind, elderly, and disabled", but also with anyone who isn't up for packing their own luggage or groceries into the trunk. Or anyone who's too intoxicated to make it up their front steps and unlock their door, or even to be sure of having arrived at their own home address or intended destination.

Then there are those who want to make an impromptu stop, or change their destination on a whim. I suppose that responding to voice commands is theoretically possible, but that might also require some programming of the human passenger, to make their requests explicit and thoroughly accurate. I can't picture AI following a pointing finger very well, or hearing every drunk well enough to make an accurate inference, or untangling an inaccurately parsed address; it's quite common for passengers to mix up "street" and "avenue" addresses, for example. The machine is also liable to have difficulty drawing the proper inference from "that 7-11 around the corner", and will probably request a different prompt. (And what if the passenger has the store name wrong, or is just saying something generic, like "kwik-mart"?)

Also, humans do sometimes need to make emergency impromptu roadside stops, most commonly to vomit. How are they going to break the news to Waymo? For that matter, how does AI process a request like "pull over now!"? Requests like those present a dilemma for a human driver as it is, but they're most often resolved through some fast verbal interaction, between humans, in which gesturing is often a crucial component.

I have some other reservations about the limits of fully autonomous AI. It's obvious that the more autonomous the vehicle, the greater the grid of surveillance and control required for operation in high-traffic settings, particularly on surface streets in urban areas. I'm not sold on the panopticon implications of such a system. Nor am I confident that much in the way of antifragile fail-safe features can be engineered into a 1:1-scale real-world application of such a grid, or that enough instant feedback mechanisms can be incorporated into it to compensate for the consequences of a single point of failure. Vehicle traffic, including the reaction of other vehicles to accidents, is all about cascading effects involving large, heavy physical objects traveling at relatively high speed. The "futuristic" mode of vehicle transportation shown in the film "Minority Report" is a child's fantasy, hmm?

Having read the news accounts of AI driving technology for many years, I still have the impression that there's been no significant input from the humans who have actually done the jobs that the AI is being touted to replace.

I also notice that most touters of fully autonomous AI driving are viewing it exclusively from the standpoint of the advantages that it promises for their personal circumstance (in the moment), while imagining that their case is universal. Their personal case may not even be all that usual.

All that said, I'm a big booster of AI-assisted driving. AI assist has the potential to lower the accident rate dramatically. It's the over-engineering that I have problems with. That perennial temptation.


I feel like this is a pretty harsh take on Tesla, and I say that as someone who is not a fan either and who also believes their current trajectory looks like failure. The first part correctly establishes that there's no denominator, but the second part tries to reason from days, and days are pretty irrelevant. Tesla arguably has one of the largest Level 2 fleets and strongly encourages its use, so it's quite reasonable to assume its miles driven are the highest. The data also doesn't include the severity of each accident and, unless I missed something, whether the Level 2 car was actually at fault. Sure, 3 crashes every four days sounds scary, but given the fleet size and how many car accidents happen every day overall, it's quite possible that it's still a good number. Without a good denominator, we simply don't know.

I really don't think Tesla will solve Level 5 with cameras, and I despise their marketing too, but, as the article correctly points out, we simply can't draw useful conclusions about Tesla's Level 2 safety from that one number.
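
To make the missing-denominator point concrete, here is a minimal sketch; every number in it is invented, because the real mileage figure is exactly what the NHTSA release doesn't provide:

```python
# Toy illustration of the denominator problem: the same crash count
# looks safe or alarming depending on the (unknown) miles driven.
# Every number here is invented for illustration.

CRASHES_PER_YEAR = 273  # ~3 crashes every 4 days, annualized

def crashes_per_million_miles(crashes: float, total_miles: float) -> float:
    return crashes / (total_miles / 1_000_000)

# Two hypothetical exposure scenarios for the identical crash count.
for label, fleet_miles in [("modest fleet mileage", 100_000_000),
                           ("huge fleet mileage", 5_000_000_000)]:
    rate = crashes_per_million_miles(CRASHES_PER_YEAR, fleet_miles)
    print(f"{label}: {rate:.2f} crashes per million miles")
```

The same 273 crashes come out at about 2.7 per million miles in one scenario and 0.05 in the other; without the denominator, the raw count tells us almost nothing.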


Gary, I’m confused by the take-away here. We all agree that

(1) not knowing the denominator makes the numbers useless as comparators: Tesla might be way better than any other company, or way worse, depending on miles driven, for example.

(2) it's just as likely that L2 crashes are caused by human error as by AI failure, precisely because they're L2. L2 doesn't mean "the computer drives, the human can overrule, but the computer prevents the driver from taking any action that causes a crash" -- if it did, then L2 crashes would necessarily be the computer's fault and thus evidence against AI.

If there is something to be learned about L5, it might be the *change* in accident *rates* from no-computer to L2, all other things being equal. If rates go down, we see the machine outperforming humans in relevant ways.
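
As a sketch of what that before/after comparison could look like if we somehow had exposure data (every count and mileage below is invented, and a Poisson interval is only a crude stand-in for a proper analysis):

```python
# Hypothetical comparison of crash rates (per million miles) for the
# same fleet without and with L2 engaged. All numbers are invented.
import math

def rate_with_ci(crashes: int, miles: float):
    """Crash rate per million miles with a rough Poisson 95% interval."""
    per_million = miles / 1e6
    rate = crashes / per_million
    half_width = 1.96 * math.sqrt(crashes) / per_million
    return rate, rate - half_width, rate + half_width

manual = rate_with_ci(crashes=480, miles=300e6)  # no L2 engaged
l2 = rate_with_ci(crashes=290, miles=250e6)      # L2 engaged

for label, (rate, low, high) in [("manual", manual), ("L2", l2)]:
    print(f"{label}: {rate:.2f}/M miles (95% CI {low:.2f}-{high:.2f})")
```

Even then, "all other things being equal" does a lot of work: L2 tends to be engaged on highways, which are safer per mile to begin with, so a raw rate drop could overstate the machine's contribution.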

But even there I think you and I (at least) would agree that inductive reasoning about AI capacities is problematic for all the good reasons.


I think Tesla will solve Full Self Driving, or get close enough for it to be useful. Musk is unreasonably optimistic but I put that down to just a marketing ploy. Musk's latest statement about Tesla being worth "zero" if it doesn't achieve FSD will only be true if other car companies attain FSD and Tesla doesn't. That seems pretty unlikely. Perhaps this is his way of saying that Tesla is way ahead on FSD (even if it isn't) and, therefore, he wants to make FSD the basis on which cars and car companies are valued.

I totally agree that the data should be more abundant and public. If they are going to drive on our roads and endanger our lives and property, they should be forced to make all this data public. I don't follow the field closely but I'm hoping such legislation is in the works.

FSD will only succeed if the public can be made to tolerate failure modes different from those of human drivers AND FSD beats human drivers on overall death and accident rates. I suspect it will get there. FSD will continue to make stupid mistakes but, hopefully, it will never make the same mistake twice, because each incident can be immediately incorporated into its training.


What does the rate look like for human drivers? That's the obvious benchmark for this software.

(Obviously we don't have complete data on the software, so it's hard to make the comparison.)
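
For rough context, here's a back-of-envelope human baseline from commonly cited US figures, roughly 6 million police-reported crashes per year over roughly 3 trillion vehicle miles; treat both as order-of-magnitude numbers:

```python
# Back-of-envelope human-driver baseline from rough US figures:
# ~6 million police-reported crashes/year, ~3 trillion vehicle miles/year.
# Order-of-magnitude only; definitions of "crash" vary across datasets.

POLICE_REPORTED_CRASHES = 6_000_000
VEHICLE_MILES_TRAVELED = 3_000_000_000_000

human_rate = POLICE_REPORTED_CRASHES / (VEHICLE_MILES_TRAVELED / 1_000_000)
print(f"~{human_rate:.0f} police-reported crashes per million miles")
# Any L2 crash count needs its own miles-driven denominator (which the
# NHTSA release doesn't provide) before it can be compared to this.
```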


Full self-driving suffers from a long-tail problem. Even if it becomes statistically safer than manual driving, even substantially so, vivid news images of the inevitable spectacular accidents caused by an AI misfire will assume outsized importance in the minds of people and politicians. It will join the ranks of other rare events, like terrorist attacks and nuclear power plant accidents, that juice up our fears in disproportion to their likelihood. Marketing is going to be tough, and perhaps impossible.


I agree we may still be far from Level 5 autonomy in rain and snow, etc.

But the article is very one-sided and appears to imply that Teslas are more dangerous than normal cars, when in fact the opposite is true.

Here's just one article: https://electrek.co/2022/05/27/tesla-owners-less-likely-crash-than-their-other-cars/

A fair comparison would look at accident statistics in similar circumstances (e.g. highway driving) per 1 million miles driven with Tesla Autopilot vs. cars without driver assist. I am sure you would find that Tesla Autopilot is in fact much safer than the average human driver. The data would also have to be cleaned of instances where Autopilot was overridden by the driver.
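
Here's a sketch of that like-for-like comparison over hypothetical trip logs; the Trip record, its fields, and the filtering rule are all assumptions of mine, not anything Tesla actually publishes:

```python
# Sketch of the comparison described above, run over hypothetical trip
# logs. Fields and filtering rules are invented; real telemetry differs.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    highway: bool
    autopilot: bool
    overridden: bool  # driver took over during the trip
    crashed: bool

def crash_rate(trips: list[Trip], *, autopilot: bool) -> float:
    """Crashes per million highway miles, dropping overridden AP trips."""
    kept = [t for t in trips
            if t.highway
            and t.autopilot == autopilot
            and not (t.autopilot and t.overridden)]
    miles = sum(t.miles for t in kept)
    crashes = sum(t.crashed for t in kept)
    return crashes / (miles / 1e6) if miles else float("nan")

# usage: crash_rate(logs, autopilot=True) vs crash_rate(logs, autopilot=False)
```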

The interesting thing about the above-quoted study is that the drivers are actually the same, i.e. there are no grounds to suspect a bias whereby Tesla owners are on average more responsible drivers.


Great read! And thanks for the footnote... it's baffling that the country that invented all those unnecessary warning labels allows the name "Autopilot".


I think you get a bit mixed up between FSD (the beta test of "Full Self-Driving", which is of course nothing of the sort) and "vanilla" Autopilot (assisted driving on highways). The ADS crashes belong to the second, not the first, and Autopilot is available on nearly all Teslas. Here is an attempt to normalise and calculate the accident data: https://engrxiv.org/preprint/view/1973/3986


Also, we don't need to give up: a better alternative might be to instrument the roads instead and use the collected data to route vehicles, so the cars need only be intelligent enough to be driven around, rather than actively making their own choices (see the sketch below). But even with this, outliers will remain a problem. Just as train and tram tracks and their surroundings are off-limits by law, this would unfortunately entail outlawing everything that would make the roads unsafe for the not-really-self-driving vehicles.
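
As a toy sketch of that idea, a central service could route cars over a road graph whose edge costs are adjusted by roadside sensors, so the vehicle itself only follows the issued route. The graph, the sensor feed, and the costs below are all invented:

```python
# Toy central router for instrumented roads: edge costs come from
# (hypothetical) roadside sensors; cars just follow the issued route.
import heapq

# Road graph: node -> list of (neighbor, base travel minutes).
ROADS = {"A": [("B", 4), ("C", 2)],
         "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)],
         "D": []}

# Hypothetical live congestion multipliers reported by road sensors.
SENSOR_LOAD = {("A", "C"): 3.0, ("C", "B"): 1.0}

def route(start: str, goal: str):
    """Dijkstra over sensor-adjusted travel times."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, minutes in ROADS[node]:
            adjusted = minutes * SENSOR_LOAD.get((node, nxt), 1.0)
            heapq.heappush(queue, (cost + adjusted, nxt, path + [nxt]))
    return float("inf"), []

print(route("A", "D"))  # (9.0, ['A', 'B', 'D']): congestion steers traffic via B
```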

Jun 15, 2022 · edited Jun 15, 2022

"Will Tesla ever “Solve” Full Self Driving?

And will it survive if it doesn’t?"

Yes, of course it will survive. The reason is that Tesla is not a philosophy company but an engineering company. They are *great* engineers. What I have observed is that they take whatever existing science and engineering offers (as in the lousy batteries that existed when the company was started) and engineer professionally -- taking advantage of scientific breakthroughs (often created by others) and incorporating them as well as making their own improvements.

I am very critical of most technology products, including the hardware/software on appliances and automobiles. However, I was overwhelmed by how well the Model 3 I have been using for 2+ years works -- the hardware and software worked!! It actually worked!! (Unlike just about all my other experiences with software designed for appliances and autos.) I obtained Full Self Driving a month ago (I'm in Canada and the rollout was slower here) -- wow. It works great... well, as great as can be expected for a deep learning system, and it will get better. But given what exists in deep learning technology (which somehow the entire lay and scientific world confounds with "AI"), it is a masterful piece of engineering.

What will happen with outliers? What any good engineering company will do -- they will see in the coming years that there are alternatives to deep learning for producing real AI and they will purchase/copy/incorporate/etc the technology and their cars will use causal logic to deal with 99.9999% of the outliers and their cars will have close to perfect full autonomy. There is little logical reason this will not be the outcome in the coming five years :)


Instead of having my grown kids take my car keys away in a few years, I'm looking forward to FSD, Summon, and voice commands. If there's a margarita maker plugged into a USB port, all the better. Based on publicly available data, Teslas are the safest cars on the road per unit distance driven, and more so when employing driver assist.


Let's say we did fully solve self-driving, defined as self-driving cars being affordable to the masses and as safe as or safer than human-driven cars.

That's not automatically a good thing just because the technology works and many benefits are delivered. With any technology, the fact that it delivers benefits is largely useless information on its own. What matters is the relationship between the benefits and the price tags: does the technology provide a net benefit to society as a whole when all factors are considered?

Fully self-driving vehicles would put tens of thousands of truck drivers and Uber drivers out of work, and probably many more people who don't come to mind at the moment. We need to consider the pain delivered to these people, and the pain that they may deliver back to society in return. If large numbers of people are left behind by technological progress, they can conclude they have little to lose, and they may start taking undesirable radical action, like voting for hyper-confident con men promising to "make America great again".

What's often not considered when discussing technological advances is that a critical mass of people needs to be able to keep up in order for progress to proceed safely. If too many people are left behind too quickly, we can't count on them to go quietly die in some run-down, dumpy trailer park.

We all go forward more or less together, or we probably don't go forward.


We humans drive on account of having grown up in the world, having experienced gravity, acceleration, other people, obstacles, bumps, slips, hazards, etc. Every SDC ever built attempts to fake such knowledge by using data to classify what's going on. As with other kinds of ML, it's fakery, and the devil is in the details. We humans will do the 'right thing' every time in an unusual situation, unless we are incapacitated (drunk, sleepy, etc.). The machine has no clue about a situation it hasn't encountered in training. In other words, unknown unknowns are not a problem for us, but they are for SDCs; each new case will lead to yet another accident. And the bulk of the world doesn't look like sunny CA, with blue skies, well-marked traffic lanes, and well-behaved pedestrians. Musk said that in 2020 an SDC would drive itself from NY to a customer in CA! It is beyond misleading; it's dangerous and irresponsible to express/promise such baseless things.
