88 Comments

The lack of true reasoning ability (reasoning as humans do), in AVs or LLMs or any other flavor of AI, is AI's ultimate limitation. But instead of recognizing this failure, the tech world is now trying to redefine reasoning, claiming that the pattern matching in LLMs is proof of their ability to reason. (As additional "proof," LLMs supposedly hallucinate just like people do. Not!) And while LLMs do, at times, accomplish some remarkable feats, those feats have been latched onto as proof of reasoning. (One LLM did an almost remarkable job of reading and summarizing my book in 90 seconds; it only left out the one core idea that takes up more pages than anything else in the book, because it was something it had never experienced or read about before. Is that a bit like a Tesla not seeing a parked jet?)

This "reasoning" is at best a new, infantile kind of 'computer reasoning' that is nothing like how humans reason. Human reasoning takes in the whole of a situation, in context, relying on our full senses and faculties, integrating all that diverse information to give meaning to what we see, read, feel, and experience (and have experienced), and only then making an intelligent decision about how to act.

Aug 19, 2023 · Liked by Gary Marcus

I live in San Francisco and drive side by side daily with these cars. You make a fantastic argument for all the flag wavers on social media (Nextdoor) who insist these are the way to go. Their argument is always that human drivers cause death and destruction. I have zero plans to get in one. I live in an extremely foggy area of the city, an area where once you crest the hill you feel as if you've entered another dimension. Doing this at night can be super stressful, as your field of sight is just a few feet in front of you. I can't even imagine how Cruise or Waymo can get a robot car through that, not to mention the fog is here to stay.


I think the current approach to self-driving cars should be called "now I've seen everything". The hope is that by putting in millions of hours of 'play', as they do with video games, they will capture all the edge cases of interest.

Human babies are not trained on billions of carefully honed examples, but rather on small numbers of experiences, often self-created. Moreover, children have an ability, unknown to current machine learning algorithms, to flexibly apply lessons from one area of learning to dramatically different areas with seeming ease.

Giant monolithic neural networks do not seem to exhibit the kind of learning performance we require, even with very large numbers of layers and nodes. They still require far more training examples than humans do, only to perform far less capably in terms of general intelligence. I don't think the "now I've seen everything" approach will work for the real world. Instead we must strive to design new algorithms that can learn in the extremely parsimonious ways humans do.
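To put a rough number on why "now I've seen everything" can't keep up, here's a back-of-the-envelope sketch; the scene factors and counts are invented purely for illustration, not taken from any dataset or fleet:

```python
# Toy illustration (numbers are made up for the sake of argument): if a driving
# scene is described by even a modest set of independent factors, the number of
# distinct scenarios explodes far faster than any fleet can log "hours of play".

from math import prod

# Hypothetical scene factors and how many values each can take.
factors = {
    "weather": 6,            # clear, rain, fog, snow, glare, night
    "road_type": 5,          # highway, arterial, residential, alley, construction
    "actors_present": 8,     # pedestrians, cyclists, scooters, animals, ...
    "actor_behaviour": 10,   # jaywalking, swerving, stopped, erratic, ...
    "occlusions": 4,
    "signage_state": 5,      # working, dark, conflicting, hand signals, ...
    "surface": 4,            # dry, wet, ice, debris
}

combinations = prod(factors.values())
print(f"Distinct scenario combinations: {combinations:,}")  # 192,000 already

# Real factors are continuous and interact, so the true space is vastly larger;
# "now I've seen everything" amounts to enumerating it example by example.
```

Even with those made-up numbers you are at 192,000 combinations, and the real factors are continuous and interact, which is exactly why enumeration alone looks hopeless.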

Aug 19, 2023 · Liked by Gary Marcus

My dismal analogy for how to make these less-than-perfect self-driving vehicles safer is the conversion of roads and streets to accommodate automobiles: reconfigure the whole world for them. Curbs to prevent cars from driving onto sidewalks, which exist to keep pedestrians off the streets. Stoplights to tell pedestrians when it's safe to cross.

Essentially, reconfigure everything to try to eliminate edge cases. I predict the best result will be more efficient parking :-)

Aug 20, 2023 · Liked by Gary Marcus

The initiative towards self-driving cars might have been a good one, but it appears we've reached a point where it can justifiably be characterised as a scam. If the remit was public safety, all those $$ and all that brainpower could have been put to far better use. There's no getting round it: edge cases are where it's at. In this regard, I'd still trust a one-eyed human driver lacking depth perception more than I would an AI bristling with sensors.


Edge cases there will always be. It's the reality of our beautifully chaotic world, the very nature of our universe. In fact, an "edge case" is really what's "normal": all those moments that take place 24/7 (cars driving, people walking, animals crossing, birds flying across, trees falling into our streets and roadways and parking lots) are each edge cases in one sense or another, and collectively they make up what we understand and experience as the real world.

The problem with the engineering mindset is that we want our data nice and clean and predictable. Sorry, that's not the real world, and thankfully so. Personally, I would find a predictable, homogeneous world frightfully dull.

I dove into this rabbit hole a few months ago; curiously, I also naturally chose autonomous cars to discuss edge cases: https://themuse.substack.com/p/death-by-a-thousand-edge-cases

Aug 20, 2023 · Liked by Gary Marcus

Good point about training cars in California sprawl versus an environment like New York City. Driving in New York is very personally interactive, involving a lot of guessing as to the other drivers’ intent, competitive merging contests, and many more encounters with erratic pedestrians, bicycles, and mopeds.

Aug 19, 2023 · edited Aug 19, 2023 · Liked by Gary Marcus

Hi Gary, excellent post (as always - duh!)...

I'd add this: the insurmountable limitation of the Physical Symbol System Hypothesis is what the SDC failure is about. Embodied biological beings (e.g. humans) experience the world (e.g. cars, roads, weather, traffic, etc.) 'directly'. It's that simple, that stark. In other words, if an SDC could literally FEEL the bumps on the road (for example), we'd be on the road (pun intended) to L10M (as opposed to mere "L5") SDCs. Adding more data won't ever fix this, including driving a trillion miles in VR. Why (not)? Because real world > virtual world.

Also, very thoughtful analyses: https://rodneybrooks.com/category/dated-predictions/


The question we should be asking is, "How quickly can we ban human drivers?" Human drivers kill other humans: a 4-year-old girl in her stroller just this week, 37 people in SF and 42,795 people across the US last year alone. Cruise and Waymo, while not yet perfect, have never killed anyone.

The ethics are incontrovertible. Humans must turn over driving to machines.


Stunning how we must have self-driving cars at any cost. But consider improving mass transit? Not exciting enough. It's like we're trying to live a fantasy rather than solve real problems.

Aug 19, 2023 · edited Aug 19, 2023

Do you not think it's *a matter of time* until driverless cars are, on balance, at least as safe as human drivers? If you do think so, then what does your distribution of the arrival time look like?


Just subscribed, so I'm copying here an email I just sent to Gary:

Great article - as usual! In a case of pundit me-too-ism, below is an article I published in the WSJ in 2018 that I think would interest you.

https://www.wsj.com/articles/why-we-find-self-driving-cars-so-scary-1527784724

This article reaches the same conclusion but takes a different tack, which I think should be added to your pile of reasons we aren't about to get full self-driving cars: consumers aren't going to tolerate a product whose mistakes are not understandable and reasonable.

Here’s the tl;dr:

"how and when they fail matters a lot…. If their mistakes mimic human errors … people are likely to be more accepting of the new technology. …But if their failures seem bizarre and unpredictable, adoption of this nascent technology will encounter serious resistance. Unfortunately, this is likely to remain the case."


Great article, as usual. And more like it will be needed. The edge case problem is a painful and persistent thorn in the side of the autonomous car industry. If it weren't for politics, no existing self-driving vehicle would be allowed on public roads. Allowing them is criminal, in my opinion.

The edge case problem is a real showstopper in our quest to solve AGI. There is no question that a truly full self-driving car will need an AGI solution. The current machine learning paradigm is no different in principle from the rule-based systems of the last century. Deep learning systems just have more rules, but the fragility remains. AGI researchers should take a lesson from the lowly honeybee. It has fewer than a million neurons, yet it can navigate and operate in extremely complex and unpredictable environments. Edge cases are not a problem for the bee. How come? Because the bee can generalize, that is, it can reuse existing cognitive structures in new situations.

We will not crack AGI unless and until generalization is solved. Based on my study of the capabilities of insects, it is my considered opinion that a full self-driving car is achievable with fewer than 100 million neurons and a fully generalized brain. Deep learning will not be part of the solution; that's for sure. Regardless of the usual protestations, DL cannot generalize by design.


Criticisms of self-driving cars are valid. Yet the cars have come a long way, and Waymo is doing better than Cruise. They will continue to improve, be cautiously scaled up, and reduce the number of traffic deaths.


Edge cases could be handled by large language models reasoning through the scenario when confronted with a novel situation. Given their increasing performance on zero-shot tasks, I would think that incorporating a fine-tuned language model into the FSD stack is a workable solution.
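Roughly, I'm picturing something like the sketch below. To be clear, this is just an illustration of the idea; `query_llm`, the `Scene` fields, and the action list are placeholders I made up, not any vendor's actual FSD interface:

```python
# Purely a sketch of the idea, not any real FSD stack: the planner falls back to
# a language model only when its confidence drops on a scene it has never seen.
# `query_llm` is a placeholder for whatever fine-tuned model/API one would use.

from dataclasses import dataclass

@dataclass
class Scene:
    description: str            # e.g. text rendered from perception outputs
    planner_confidence: float   # 0..1, from the conventional FSD planner

ALLOWED_ACTIONS = {"proceed", "slow", "stop", "pull_over"}

def query_llm(prompt: str) -> str:
    """Placeholder: call a fine-tuned language model and return one action word."""
    raise NotImplementedError

def choose_action(scene: Scene, threshold: float = 0.8) -> str:
    # Normal case: trust the conventional planner.
    if scene.planner_confidence >= threshold:
        return "proceed"

    # Novel/edge case: ask the LLM to reason over a textual scene description,
    # but constrain it to a small action vocabulary and default to the safest
    # action if the answer is unusable or the call fails.
    prompt = (
        "You are assisting a driving planner. Scene: "
        f"{scene.description}\n"
        f"Reply with exactly one of: {', '.join(sorted(ALLOWED_ACTIONS))}."
    )
    try:
        answer = query_llm(prompt).strip().lower()
    except Exception:
        return "stop"
    return answer if answer in ALLOWED_ACTIONS else "stop"
```

Constraining the model to a tiny, verifiable action vocabulary (and defaulting to the safest action when the answer is unusable) is what would make a fallback like this even remotely defensible.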


I still shudder thinking about how many dead e-scooter riders we would have had to bury if driverless cars had been ubiquitous when the scooters first appeared on streets in March 2018.
