36 Comments
Jun 16, 2022 · Liked by Gary Marcus

The peskiness of RL, so different from simulations: curve fitting can never solve it, and even causality won't help. We need a model-building loop, and to demote the DL stuff to a useful signal to add to the others.

Jun 15, 2022 · edited Jun 15, 2022

There's a far bigger question that needs answering, in my opinion: as a society, do we actually NEED FSD technology? Maybe what we really need is a super-efficient public transport infrastructure that would obviate the need to solve the potentially unsolvable, and obviate the rhetoric of safety concerns, because it would naturally lead to less road use and therefore fewer deaths. For those still concerned about those on the road, it has been shown that the two-lane roundabout can reduce the biggest cause of accidents, those at intersections, by up to 80%. The experiment has already been run. Roundabouts are relatively cheap passive devices that require little attention in terms of maintenance; what's not to like? Oh, yeah, they're not very technologically sexy... FSD is not the solution; what's needed is a societal reframing.


Yes, roundabouts can be mathematically efficient for certain volumes of traffic (I am not a traffic mathematician, but it seems reasonable within boundaries). But you are wrong about FSD. The FSD technology is amazing. By the way, we have hardly any roundabouts in the area of Canada where I live, and last week my Tesla FSD encountered one; it handled it perfectly, better than I could. FSD makes driving wonderful. A world with FSD and efficient, inexpensive, personalized EV transport is an order of magnitude better than a world chasing the delusion of efficient public transport.

Jun 15, 2022 · edited Jun 15, 2022

I asked whether FSD is NEEDED, not whether it is DESIRED. Societal considerations need to be based on need, not want. The roundabout concept can cope with a wide range of traffic volumes; it improves traffic flow, reduces wear and tear on vehicles (thereby reducing maintenance costs and pollution), speeds up journey times, and cuts greenhouse gas emissions. Win, win, win, win. You'd have to be crazy or deluded not to be an advocate for them. It's a no-brainer when comparing personal vehicles with mass public transport in terms of pollution, efficiency, and cost of travel. Seems to me you are blinded, like many others, by the promise of the sexy new technology...

Jun 15, 2022 · edited Jun 15, 2022

Hi Salvatore,

Yes, I agree with you, and the AI geek inside me is overwhelmed that, during my lifetime, a self-driving car, albeit an imperfect one, is driving me around.

However, I am *not* against roundabouts. If indeed they prevent accidents and allow full traffic flow, then why have I seen so few of them in Montreal and Toronto, and so few on trips to the USA? I am not knowledgeable in the mathematics of traffic flow, but I take your word that these structures are as good as you say. Hence my puzzlement at why they haven't been adopted more widely.

Also, I try to use public transportation near where I live, but a 20-minute car ride becomes 2 hours via public transport, which is about the same as the 2 hours it would take to walk to work. I'm not in a great location for efficient public transport, but for other people and other cities, sure, it can be very efficient, e.g., NYC, where everything is connected.

Jun 15, 2022 · edited Jun 15, 2022

The experiment was run somewhere in the north midwestern US; I can't remember exactly where. The mayor took it upon himself to have intersections replaced with roundabouts, against the wishes of the majority of the population. Results showed that road accidents as a whole were reduced by up to 80%, and average journey times fell. The populace eventually got over their fear of the roundabout, and the whole scheme was considered a roaring success. The reason we don't see the roundabout widely adopted is political fear coupled with public intransigence. Oftentimes it just takes political balls to grab the bull by the horns.


Wow. Cool.

Jun 15, 2022 · edited Jun 15, 2022

Yes, big cities are generally better served. I live in London. I haven't owned a car for over 20 years and can travel easily via public transport, both within London and up and down the country to major cities and towns. Driving in London is a horrendous nightmare; if you do it voluntarily, you'd literally have to be mad or a masochist 🤣.

Jun 16, 2022 · edited Jun 16, 2022

Speaking as someone with elderly relatives who are slow to give up the keys: the mobility FSD would provide to large segments of society would be enormously helpful. Not everyone can use public transit easily, nor does it reach everywhere.

If the technology ever became good enough, a big if, it would also revolutionize transportation and shipping. Some of this would be tragic if it happened before driving jobs could be replaced by other work. But the net economic benefit, in cold productivity terms, would be great.

Not to mention, when I am driving, I am doing nothing else. Boy would I prefer to be able to read or do work.

May 2, 2023 · edited May 2, 2023 · Liked by Gary Marcus

"Does that mean that society should give up on building driverless cars? Absolutely not. In the long run, they will save lives, and also give many people, such the blind, elderly, and disabled, have far more autonomy over their own lives."

Speaking as someone with many years in the business of human passenger transport, I can attest that there's much more to the process than simply navigating a vehicle safely, especially when dealing with "the blind, elderly, and disabled", but also with anyone who isn't up for packing their own luggage or groceries in the car trunk. Or anyone who's too intoxicated to make it up their front steps and unlock their door, or even to be sure of having arrived at their own home address or intended destination.

Then there are those who want to make an impromptu stop, or change their destination on a whim. I suppose that response to voice commands is theoretically possible, but that might also require some programming of the human passenger, to make their request explicit and thoroughly accurate; I can't picture AI following a pointing finger very well, or hearing every drunk well enough to make an accurate inference, or being able to untangle an inaccurately parsed address. It's quite common for passengers to mix up "street" and "avenue" addresses, for example. The machine is also liable to have difficulty drawing the proper inference from "that 7-11 around the corner", and will probably request a different prompt. (And what if the passenger has the store name wrong, or is just saying something generic, like "kwik-mart"?)

Also, humans do sometimes need to make emergency impromptu roadside stops, most commonly to vomit. How are they going to break the news to Waymo? For that matter, how does AI process a request like "pull over now!"...? Requests like those present a dilemma for a human driver as it is, but they're most often resolved through fast verbal interaction. Between humans. Gesturing is often a crucial component in that communication.

I have some other reservations about the limits of fully autonomous AI. It's obvious that the more autonomous the vehicle capability, the greater the grid of surveillance and control required for operation in high-traffic settings, particularly on surface streets in urban areas. I'm not sold on the panopticon implications of such a system. Nor am I confident that many antifragile fail-safe features can be engineered into a 1:1-scale real-world application of such a grid. I'm skeptical that enough instant feedback mechanisms can be incorporated into such a grid to compensate for the consequences of a single point of failure. Vehicle traffic, including the reaction of other vehicles to accidents, is all about cascading effects that involve large, heavy physical objects traveling at relatively high rates of speed. The "futuristic" mode of vehicle transportation shown in the film "Minority Report" is a child's fantasy, hmm?

Having read the news accounts of AI driving technology for many years, I still have the impression that there's been no significant input from the humans who have actually done the jobs that the AI is being touted to replace.

I also notice that most touters of fully autonomous AI driving are viewing it exclusively from the standpoint of the advantages that it promises for their personal circumstance (in the moment), while imagining that their case is universal. Their personal case may not even be all that usual.

All that said, I'm a big booster of AI-assisted driving. AI assist has the potential to lower the accident rate dramatically. It's the over-engineering that I have problems with. That perennial temptation.


I feel this is a pretty harsh take on Tesla, and that's coming from someone who is not a fan either and who believes their current trajectory points toward failure. The first part correctly establishes that there's no denominator, but the second tries to use days as one, and days are pretty irrelevant. Tesla arguably has one of the largest Level 2 fleets and encourages its use a lot, so it's quite reasonable to assume that their miles driven are the highest. The data also doesn't include the severity of the accidents and, unless I missed something, it doesn't include whether the Level 2 car was actually at fault. Sure, 3 crashes every four days sounds scary, but if you look at their fleet size and how many car accidents happen every day overall, it's quite possible that it's still a good number. Without a good denominator, we simply don't know.

I really don't think Tesla will solve Level 5 with cameras, and I despise their marketing too, but, as the article correctly points out, we simply can't draw useful conclusions about Tesla's Level 2 safety from that one number.


Gary, I’m confused by the take-away here. We all agree that

(1) not knowing the denominator makes the numbers useless as comparators: Tesla might be way better than any other company, or way worse, depending on miles driven, for example.

(2) it’s just as likely that L2 crashes are caused by human error as by AI failure, precisely because they’re L2. L2 doesn’t imply “the computer drives, the human can overrule, but the computer prevents the driver from taking any action that causes a crash” -- if it did, then L2 crashes would necessarily be the computer’s fault, and thus evidence against AI.

If there is something to be learned about L5, it might be the *change* in accident *rates* from no-computer to L2, “all other things being equal”. If rates go down, we see the machine outperforming humans in relevant ways.

But even there I think you and I (at least) would agree that inductive reasoning about AI capacities is problematic for all the good reasons.


I addressed this in my own post, linked below. A much better denominator to use is miles. We can estimate that Tesla drivers covered at least 1 billion miles during this timeframe in the US, based on previous numbers Tesla has issued. This can't give us an exact number, because it's an estimate, but it lines up pretty closely with Tesla's own published accident-rate figures.
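To make that concrete, here is the back-of-the-envelope arithmetic in code. The crash rate comes from the essay's "roughly 3 accidents every 4 days"; the window length and the mileage are my own assumptions, so treat the output as a sketch, not a measurement:

```python
# Back-of-the-envelope crash-rate sketch. Every input is an assumption
# taken from this discussion, not an official figure.

crashes_per_day = 3 / 4        # the essay's "roughly 3 accidents every 4 days"
window_days = 10 * 30          # assumed ~10-month NHTSA reporting window
fleet_miles = 1e9              # rough estimate of Autopilot miles in the window

total_crashes = crashes_per_day * window_days
rate_per_million_miles = total_crashes / (fleet_miles / 1e6)

print(f"~{total_crashes:.0f} reported crashes over the window")
print(f"~{rate_per_million_miles:.2f} crashes per million Autopilot miles")
```

Halve the mileage estimate and the headline rate doubles; that sensitivity is the whole denominator problem.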

I would like to have better data. But the data we have does not indicate any kind of safety problem with Autopilot. The Ford numbers are potentially more concerning (but also need clarification from Ford).

https://brandonpaddock.substack.com/p/nhtsa-data-backs-up-teslas-own-autopilot?utm_source=twitter&sd=pf&s=w

author

Safety problem compared to what? The numbers do not suggest that you could simply run current Tesla techniques at scale at L5. Tesla doesn’t believe that, and you shouldn’t either. The fact that they might be better than, e.g., Ford isn’t much consolation if it’s not good enough for prime time.

We have so little clarity on what is even being measured that I offer miles driven only as a very crude and convenient proxy; certainly not the final litmus test. Way too coarse.


Of course not. I never said anything remotely like that. Autopilot is an L2 system. It cannot operate without a human driver.

The primary question here is very simple:

Does the presence of Autopilot (and/or other L2 ADAS offerings) make roads more safe, less safe, or have no meaningful impact on safety?

The data here supports that roads are the same or more safe with Autopilot in use. It is not conclusive, and any change in safety in either direction is small and thus harder to detect causally with statistical significance, but there is zero reason to believe the answer is “less safe”. Absolutely zero.
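To illustrate why a small delta is so hard to detect, here is a minimal statistical sketch: a normal-approximation z-test comparing two Poisson crash rates. Every count and mileage below is invented purely for illustration:

```python
import math

def two_rate_z(crashes_a, miles_a, crashes_b, miles_b):
    """Normal-approximation z-statistic for the difference between two
    crash rates, treating each crash count as Poisson-distributed."""
    rate_a = crashes_a / miles_a
    rate_b = crashes_b / miles_b
    # Var(count/miles) = count/miles^2 under the Poisson assumption
    se = math.sqrt(crashes_a / miles_a**2 + crashes_b / miles_b**2)
    return (rate_a - rate_b) / se

# Hypothetical: a genuine 5% safety improvement, 100M miles per group.
z = two_rate_z(200, 100e6, 190, 100e6)
print(f"z = {z:.2f}")  # ~0.51, far below the ~1.96 needed for p < 0.05
```

Even a real 5% improvement is statistically invisible at these sample sizes; detecting it reliably would take billions of carefully matched miles.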

author

My essay was mainly about whether we can get to L5 without a paradigm shift, and I think the answer to that is no. L2 also needs some work, and a great deal more data clarity, but it was not my emphasis.


That’s a separate matter. For one thing, L5 is a poorly defined and largely unattainable goal. A strict interpretation is an *undesirable* goal, because it says the system will operate in all circumstances where a human can -- but humans often drive when they should not (e.g. in a blizzard).

The only interesting goals right now are L3 and L4. The former is “conditional automation”, where there is still a human driver, but they can text or watch a movie or otherwise stop monitoring the road. However, they must be present and alert, and ready to take over from the system when given sufficient warning (e.g. when exiting the mapped highway, when weather gets worse, etc).

L4 means there is no human driver, and covers both cars where you can drive manually but when the system is engaged you can go to sleep or not be in the driver’s seat, as well as robotaxis and cars without steering wheels or pedals. L4 handles the entire driving task with no human driver fallback. But it has limits - e.g. a geofence, weather, time of day, etc.

Tesla’s approach is very likely to offer both L3 and L4 modes in the coming years. I feel confident they will not go beyond L2 with current hardware, but they will soon introduce their “hardware 4.0” suite with a new computer, upgraded cameras, and an imaging radar. With this, I think it is inevitable that they will offer a safe, effective L3 mode in the next 1-2 years, and L4 capabilities in 2-4.


I think Tesla will solve Full Self Driving, or get close enough for it to be useful. Musk is unreasonably optimistic but I put that down to just a marketing ploy. Musk's latest statement about Tesla being worth "zero" if it doesn't achieve FSD will only be true if other car companies attain FSD and Tesla doesn't. That seems pretty unlikely. Perhaps this is his way of saying that Tesla is way ahead on FSD (even if it isn't) and, therefore, he wants to make FSD the basis on which cars and car companies are valued.

I totally agree that the data should be more abundant and public. If they are going to drive on our roads and endanger our lives and property, they should be forced to make all this data public. I don't follow the field closely but I'm hoping such legislation is in the works.

FSD will only succeed if the public can be made to tolerate failure modes different from those of human drivers AND FSD beats human drivers on overall death and accident rates. I suspect it will get there. FSD will continue to make stupid mistakes but, hopefully, it never makes the same mistake twice, because each incident can be immediately incorporated into training.
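A minimal sketch of that feedback loop, with everything in it (class name, incident format, the retraining step) invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FleetLearner:
    """Collects fleet incidents and folds them into the training set
    before the next retraining cycle (placeholder for a real pipeline)."""
    training_set: list = field(default_factory=list)

    def report_incident(self, sensor_snapshot, correct_action):
        # Each fleet incident becomes a labeled training example.
        self.training_set.append((sensor_snapshot, correct_action))

    def retrain(self):
        # Placeholder: a real system would retrain the driving policy here.
        print(f"retraining on {len(self.training_set)} examples, "
              f"including all past incidents")

learner = FleetLearner()
learner.report_incident({"scene": "parked firetruck, lane blocked"}, "slow_and_yield")
learner.retrain()
```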


What does the rate look like for human drivers? That's the obvious benchmark for this software.

(Obviously we don't have complete data on the software, so it's annoying to make the comparison.)


Full self-driving suffers from a long-tail problem. Even if it becomes statistically safer than manual driving, even substantially so, vivid news images of the inevitable spectacular accidents caused by an AI misfire will assume outsized importance in people's and politicians' minds. It will join the ranks of other rare events, like terrorist attacks and nuclear power plant accidents, that juice up our fears out of proportion to their likelihood. Marketing is going to be tough, and perhaps impossible.


I agree we may still be far from Level 5 autonomy in rain and snow, etc.

But the article is very one-sided and appears to imply that Teslas are more dangerous than normal cars, when in fact the opposite is true.

Here’s just one article: https://electrek.co/2022/05/27/tesla-owners-less-likely-crash-than-their-other-cars/

A fair comparison would be to look at accident statistics in similar circumstances (e.g. highway), per 1 million miles driven, with Tesla Autopilot vs. cars without driver assist. I am sure you would find that Tesla Autopilot is in fact much safer than the average human driver. The data would also have to be cleaned of instances where Autopilot was overridden by the driver.
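In code, the comparison I have in mind would look something like the sketch below; every count and mileage in it is a placeholder, not real data, since the real numbers are exactly what we would need Tesla to publish:

```python
# Sketch of a like-for-like comparison: highway crashes per million miles
# for the same cars with Autopilot engaged vs. disengaged. All numbers
# below are placeholders, invented for illustration only.

def per_million_miles(crashes, miles):
    return crashes / (miles / 1e6)

highway = {
    "autopilot_on":  {"crashes": 50,  "miles": 400e6},   # placeholder
    "autopilot_off": {"crashes": 180, "miles": 600e6},   # placeholder
}

for mode, d in highway.items():
    rate = per_million_miles(d["crashes"], d["miles"])
    print(f"{mode}: {rate:.2f} crashes per million miles")

# Crashes where the driver overrode Autopilot seconds before impact need a
# consistent attribution rule, or the "autopilot_off" bucket gets polluted.
```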

The interesting thing about the above-quoted study is that the drivers are actually the same, i.e. there are no grounds to suspect a bias whereby Tesla owners are on average more responsible drivers.

author

Where do I “appear to imply that Teslas are more dangerous than normal cars”?


It’s the general ‘feel’ I got reading it. Probably not intended by the author, but it’s how I read it. The following might have contributed to that:

1) “So we can conclude that Tesla has had roughly 3 accidents every 4 days.” - no attempt to compare this to any other car (e.g. Toyota), adjusted for fleet size

2) “…number of incidents […] would likely jump radically if we took humans out of the loop altogether” - No, I would argue that incidents would be lower, even with the current level of self-driving capability, as the articles I shared show.

3) “… especially when nonconsenting human beings become unwilling participants.” - Implies that these nonconsenting humans are put at higher risk than would otherwise be the case. But this is not so, as the risk of an accident is already significantly lower with current technology.

author

I *am* the author, and that is not how I intended it. The essay is about whether we are close to L5; we aren't.

Jun 16, 2022 · Liked by Gary Marcus

I know you are. And I like your substack. That’s why I subscribed.

Thank you for engaging and keep it up.


I didn't get that sense from this piece. Teslas are in fact some of the safest cars out there, according to the National Highway Traffic Safety Administration itself.


Great read! And thanks for the footnote... it's baffling that the country that invented all those unnecessary warning labels allows the name "autopilot".


It’s an apt name. In aviation it basically means “cruise control”, though obviously more advanced autopilots exist today than when the concept was first introduced. Autopilots in aircraft do not replace pilots, and do not make aircraft autonomous. Everyone knows this, too, because there’s always a pilot (and a co-pilot) on every commercial airline flight. In the aviation world, pilots are taught that “the automation doesn’t fly the plane; you fly the plane through the automation”. The same is true for Tesla’s Autopilot system.

Now, the “Full Self Driving” and “FSD Capability” naming is another matter. I have issues with that. Less so for the beta, but very much for the non-beta L2 functionality they now sell under that name.


I understand what you're saying, and fully agree with your second paragraph. But I'm not so sure how widely the aviation meaning of autopilot is understood by the public when applied to Tesla.

Agree to disagree I guess, ¯\_(ツ)_/¯


I think you've gotten a bit mixed up between FSD (the beta test of full self-driving, which is of course nothing of the sort) and “vanilla” Autopilot (assisted driving on highways). The ADS crashes are part of the second, not the first. Autopilot is available on nearly all Teslas. Here is an attempt to normalise and calculate accident data: https://engrxiv.org/preprint/view/1973/3986


Also, we don't need to give up: a better alternative might be to instrument the roads instead, and use the collected data to route vehicles. The cars then need only be intelligent enough to be driven around, rather than actively making their own choices. But even with this, outliers would remain a problem. Just as train and tram tracks and their surroundings are off-limits by law, this would unfortunately entail outlawing everything that would make the roads unsafe for the now-self-driving vehicles.
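As a toy illustration of that division of labor, here is a sketch in which the intelligence lives in the instrumented road network rather than the car: segments report live traversal costs, a central router plans each path, and the vehicle only follows it. The graph, costs, and place names are entirely made up:

```python
import heapq

def route(graph, start, goal):
    """Dijkstra over live segment costs reported by roadside instrumentation."""
    frontier = [(0.0, start, [start])]   # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, seg_cost in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + seg_cost, nxt, path + [nxt]))
    return float("inf"), []

# Made-up road graph; congestion on A->C makes the longer B route cheaper now.
roads = {
    "A": {"B": 2.0, "C": 9.5},
    "B": {"C": 2.5, "D": 7.0},
    "C": {"D": 3.0},
}
print(route(roads, "A", "D"))  # (7.5, ['A', 'B', 'C', 'D'])
```

The hard part, as noted above, is everything off-graph: the moment a pedestrian or a non-instrumented vehicle enters a segment, the router's costs stop describing reality.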

Jun 15, 2022 · edited Jun 15, 2022

"Will Tesla ever “Solve” Full Self Driving?

And will it survive if it doesn’t?"

Yes, of course it will survive. The reason is that Tesla is not a philosophy company but an engineering company. They are *great* engineers. What I have observed is that they take whatever existing science and engineering offer (as with the lousy batteries that existed when the company was started) and engineer professionally, taking advantage of scientific breakthroughs (often created by others) and incorporating them, as well as making their own improvements.

I am very critical of most technology products, including the hardware/software on appliances and automobiles. However, I was overwhelmed by how well the Model 3 I have been using for 2+ years works -- the hardware and software worked!! It actually worked!! (Unlike just about all my other experiences with software designed for appliances and autos.) I obtained Full Self Driving a month ago (I'm in Canada and the rollout was slower here) -- wow. It works great... well, as great as can be expected for a deep learning system, and it will get better. But given what exists in deep learning technology (which somehow the entire lay and scientific world confounds with "AI"), it is a masterful piece of engineering.

What will happen with the outliers? What any good engineering company does: they will see in the coming years that there are alternatives to deep learning for producing real AI, and they will purchase/copy/incorporate that technology; their cars will use causal logic to deal with 99.9999% of the outliers and will have close to perfect full autonomy. There is little logical reason this will not be the outcome in the coming five years :)


Instead of having my grown kids take my car keys away in a few years, I'm looking forward to FSD, Summon, and voice commands. If there's a margarita maker plugged into a USB port, all the better. Based on publicly available data, Teslas are the safest cars on the road per unit distance driven, and more so when employing driver assist.


Let's say we did fully solve self-driving, defined as self-driving cars being affordable to the masses, and as safe as or safer than human-driven cars.

That's not automatically a good thing just because the technology works and delivers many benefits. With any technology, the fact that it delivers benefits is largely useless information on its own. What matters is the relationship between the benefits and the price tags. Does the technology provide a net benefit to society as a whole when all factors are considered?

Fully self-driving vehicles would put tens of thousands of truck drivers and Uber drivers out of work, and probably lots more people who don't come to mind at the moment. We need to consider the pain delivered to these people, and the pain that they may deliver back to society in return. If large numbers of people are left behind by technological progress, they can conclude they have little to lose, and they may start taking undesirably radical action, like voting for hyper-confident con men promising to "make America great again".

What's often not considered when discussing technological advances is that a critical mass of people needs to be able to keep up in order for progress to proceed safely. If too many people are left behind too quickly, we can't count on them to go quietly die in some run-down, dumpy trailer park.

We all go forward more or less together, or we probably don't go forward.


We humans drive on account of having grown up in the world, having experienced gravity, acceleration, other people, obstacles, bumps, slips, hazards, etc. Every SDC ever built attempts to fake such knowledge by using data to classify what's going on. As with other kinds of ML, it's fakery. The devil is in the details. We humans will do the 'right thing' every time in an unusual situation, unless we are incapacitated (drunk, sleepy, etc.). The machine has no clue about a situation it hasn't encountered in training. In other words, unknown unknowns are not a problem for us, but they are for SDCs. Each new case will lead to yet another accident. The bulk of the world doesn't look like sunny CA, with blue skies, well-marked traffic lanes, well-behaved pedestrians, etc. Musk said that in 2020 an SDC would drive itself from NY to a customer in CA! It is beyond misleading; it's dangerous and irresponsible to promise such baseless things.
