78 Comments

I agree with every point you made above. The whole thing is indeed a farce. Also, all the talk about progress being made toward solving AGI that we hear coming from generative AI experts and AI executives is pure snake oil.

Deep learning (generative AI) will never get us closer to AGI, regardless of its undeniable usefulness in some applications. It is a giant step backward in humanity's quest for AGI, in my opinion. It sent everyone chasing a red herring, and it sucked all the funding out of research on alternative AI models.

AI research should be focused primarily on systematic generalization. Without fully generalized perception, the corner-case problem that plagues both self-driving systems and LLMs cannot be solved. Deep learning is based on objective function optimization, which is the opposite of generalization. It is useless for AGI.
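A toy illustration of the distinction the commenter draws: minimizing a training objective is not the same as generalizing. Below, a high-capacity polynomial drives its training loss to essentially zero on a handful of noisy points, yet fits the underlying function worse off those points. The data and degrees are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) plus noise; only 8 training points.
x_train = np.linspace(0, 3, 8)
y_train = np.sin(x_train) + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(0, 3, 100)
y_test = np.sin(x_test)

def mse(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

# A degree-7 polynomial has enough capacity to interpolate all 8
# points, i.e. to optimize the training objective almost perfectly...
coeffs = np.polyfit(x_train, y_train, deg=7)
train_err = mse(y_train, np.polyval(coeffs, x_train))

# ...but optimizing that objective says nothing about behaviour
# between and beyond the training points.
test_err = mse(y_test, np.polyval(coeffs, x_test))
```

Here `train_err` is tiny while `test_err` stays strictly larger, since the model has memorized noisy targets rather than the underlying function.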

The deep learning era will soon come to an end. It's time to look under a different lamppost.


Remote-assisted driving could be a nightmare. I recently had solar panels fitted, and the usage statistics are handled remotely, when the internet and the remote computer are not overloaded. Imagine driving at speed along a motorway when all communications are lost (it might even be a major solar flare disrupting satellite cover). Any safe system must be self-contained.
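The "self-contained" requirement can be made concrete with a minimal sketch: an on-vehicle watchdog that uses remote commands only while the link is demonstrably alive, and otherwise hands control to a local fallback (e.g., a minimal-risk stop). The class, timeout value, and command names here are all hypothetical, not any vendor's actual design.

```python
import time

LINK_TIMEOUT_S = 0.5  # hypothetical: max tolerated silence on the remote link

class RemoteLinkWatchdog:
    """On-vehicle monitor: if the remote-assist link goes quiet,
    hand control to a self-contained fallback instead of waiting."""

    def __init__(self, timeout_s=LINK_TIMEOUT_S, now=time.monotonic):
        self.timeout_s = timeout_s
        self.now = now                 # injectable clock, eases testing
        self.last_heartbeat = now()

    def heartbeat(self):
        # Called whenever a packet arrives from the remote operator.
        self.last_heartbeat = self.now()

    def link_alive(self):
        return (self.now() - self.last_heartbeat) < self.timeout_s

def control_step(watchdog, remote_command, local_fallback):
    # Use the remote command only while the link is provably fresh;
    # otherwise fall back to on-board behaviour.
    if watchdog.link_alive() and remote_command is not None:
        return remote_command
    return local_fallback()
```

With an injected clock you can verify the failover directly: once the heartbeat goes stale, `control_step` ignores any remote input and returns the local fallback.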


I thought there had been only 2 AI Winters. My mistake.

Wikipedia lists quite a few more. https://en.wikipedia.org/wiki/AI_winter

Anyone feel a chill?

Nov 22, 2023 · edited Nov 22, 2023 · Liked by Gary Marcus

Many of you posting in the comment section for this story sound like you have programming experience in the field of automated-vehicle engineering.

How many of you have ever worked as a driving professional in an occupation like cabdriver (mostly on surface streets) for at least one year and >40,000 miles, or as an over the road truck driver for at least one calendar year and >80,000 miles? In all sorts of traffic and road conditions, lane closures for maintenance work on roads and bridges, hazardous weather, at all hours, urban, suburban, rural, and freeway routes, with no traffic infractions or accidents?

( The "professional" part is important, because it implies driving for many hours day after day, whether you feel like it or not; and running someone else's miles, not on your own preference or schedule, sometimes navigating routes that you've never driven, to places you've never been, under driving conditions that you'd rather not have to contend with. And, if you stay out there long enough, eventually encountering unpredictable circumstances and unusual, unforeseen challenges. Also, nobody lasts in the business unless they're safe drivers with good driving records. Insurance, you know.)

If you're members of a team, how many of the team members have that amount of hands-on competence, doing the work professionally?

Failing that, how many have professional driving experience that approaches the amount specified in my first question?

I really would like to get at least one reply to those questions, whether affirmative or negative.


Gary, remote assist is nothing new and that is well known within the AV sector. Everyone uses it - and in the initial ‘learning’ phases, do so quite intensively. Contrary to the leading premise of this commentary, remote assist is neither a rumour, nor is it a surprise. Waymo, for example, spoke about it openly many years ago. Presumably Waymo (which has been much safer than GM) relies on remote assist much less now - they may have published info about that, I don’t know.

In the distant or impossible future when full Level 5 becomes commonplace and trustworthy, humans will still be in the loop to deal with special situations, emergencies, etc.

And, AVs don’t need to be perfect. They just need to significantly improve on the 42,000 traffic deaths per year in the US (2021 statistic).


Maybe it's a good occasion to recall that Toyota's approach is precisely a driver-assistance system, which was presented in 2019: https://spectrum.ieee.org/ces-toyota-lifts-veil-from-driver-assist-system.

Essentially, the car prevents the driver from making incorrect movements or wrong decisions.

Time will tell if this 'modest' technological approach is the correct one for specific urban settings.

Nov 5, 2023 · edited Nov 6, 2023

The problem of self-driving cars has three sides, technical, economic, and social acceptance.

1. Technical: It's best to dissociate discussion of self-driving cars from "AGI". AGI is a red herring. Self-driving cars are a specialized form of AI that relies heavily on ML and huge amounts of data, but is nothing like LLMs. Self-driving car AI lacks general reasoning and world knowledge and all that, but that doesn't prevent them from being very effective in the majority of relatively routine driving conditions, which is actually quite big and diverse. They are probably safer than the average human driver in average conditions. Lacking a human mind, they perform differently---and often worse---in the long tail of unusual situations. No surprise there, there are different types of intelligence, with different strengths and weaknesses. The shape and tractability of the tail can be known only from real-world deployments.

2. The long, fat tail of non-routine situations is being addressed with human remote monitoring and assistance. Hooray! Autonomous vehicle companies are smart to over-staff this function for three reasons: (a) collect data to push the tail back; (b) be conservative about safety and disruption; (c) meet peak demand loads. Whether this is economical or not in the long run depends on the learning rate; driving conditions in which the vehicles are deployed; and safety/disruption/cost tradeoffs. We are in violent agreement (along with industry analyst Brad Templeton) not only that the public deserves transparency here, but that it is in the companies' interest to provide it. It is way premature to make a call that these things are not economically viable. The savings to society from getting bum human drivers off the road are monumental. The gamble that investors are making is that this saving can be harvested. That's in addition to the benefits from increased options for mobility.

3. While the AI-technology and economic calculations are churning away, the biggest challenge now is social acceptance. LLM alarm and skepticism have helped to push public sentiment against AI overall. Deceptive over-promotion of "full self-driving" hurts as well. Every mishap will make the news in a way that human-caused car tragedies do not. This is just how the collective mind works, so management of expectations is paramount. In our society, fortunately, the best strategy is transparency. Then, at least the debates could be based on well-established, open facts instead of us having to finely parse spin found in New York Times articles.
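The staffing economics in point 2 can be sketched as a back-of-envelope calculation. Every number below is an illustrative assumption, not an industry figure; the point is only that the operator bill scales with the intervention rate, which is exactly what the "learning rate" should drive down over time.

```python
def remote_ops_needed(fleet_size, interventions_per_vehicle_hour,
                      minutes_per_intervention, peak_factor=1.5):
    """Back-of-envelope: concurrent remote operators a fleet needs.
    All inputs are illustrative assumptions, not industry figures."""
    # Expected intervention-minutes per hour across the whole fleet...
    demand_min_per_hour = (fleet_size * interventions_per_vehicle_hour
                           * minutes_per_intervention)
    # ...converted to operator-hours per hour, padded for peak load.
    return demand_min_per_hour / 60.0 * peak_factor

# e.g. 300 vehicles, 0.5 interventions per vehicle-hour,
# 2 minutes each, 1.5x peak staffing:
ops = remote_ops_needed(300, 0.5, 2.0)   # → 7.5 operators
```

Halve the intervention rate (the learning effect) and the staffing requirement halves with it, which is the core of the economic bet described above.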


"No fantasy about AGI thus far has survived contact with the real world." Ain't that a fact.

Maybe it should read "No conviction about around-the-corner AGI thus far has survived contact with the real world." Human convictions are funny things, and are quite resistant to reasoning and observations. What the crash of AGI expectations will, hopefully, bring us is some proper attention to the 'fantasies' we constantly entertain. AI might teach us, above all, a useful lesson about 'human intelligence'.


"No fantasy about AGI thus far has survived contact with the real world." - statement of the decade :)


I very much agree that self-driving cars continue to feel further off than people imagined. LLMs have tons of training data, but cars don't have much, and similar strategies will struggle there. So I guess it makes sense that they're collecting a ton of data (from local or remote driver assist). TBD whether that gets us anywhere...

But that said, I'm not sure the goal of self-driving was ever "AGI capable of doing any task". Are you refuting someone who said that self-driving cars would give us AGI?

It also sounds like you’re saying transformers are the cause of hallucinations. I had always assumed it was the “next token prediction” of positive-only dataset examples in pretraining that gives it the confidence in its hallucinations. Can you share more about why you think the transformer is fundamentally at fault and will lead to hallucinations if used in self driving AI? Would be curious how you make that connection
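One mechanical piece of the "next token prediction" point above can be shown directly: the softmax over a model's output logits always yields a proper probability distribution, so decoding always picks some token with apparent confidence, even when the model has no grounds for an answer. This is a generic illustration of the mechanism, not a claim about which architectural component causes hallucination; the logit values are made up.

```python
import math

def softmax(logits):
    # Standard softmax: shift for numerical stability, exponentiate, normalize.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Whatever the logits are -- including for a prompt the model has no
# basis to answer -- softmax produces a distribution summing to 1,
# and greedy decoding will still confidently pick *some* token.
logits = [2.1, 0.3, -1.0, 0.5]       # hypothetical scores over 4 tokens
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
```

Here `probs` sums to 1 and `best` selects index 0 (the largest logit); nothing in the mechanism can output "I don't know" unless such a token is explicitly trained for.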


My p(doom) for AGI-in-my-lifetime just went up.

Seems more and more likely that AI is well down the transformer off-ramp, about to get seriously lost, unable to find its way back to the main road. Or to strain yet another metaphor, AI winter is coming and it looks like a long one.

If the consumer base tires of unreliable-to-deadly, over-hyped "AI solutions," funding into AI research (especially private sector funding) is likely to dry up.

My best hope for AGI in my lifetime is a dramatically extended lifetime (which will more likely come from highly targeted ML models than, say, an LLM magically curing cancer at the behest of a well-crafted text prompt).


Waymo has a much better product than Cruise (see https://www.understandingai.org/p/driverless-cars-may-already-be-safer and other posts by this author). The "rumors" you mentioned about frequently calling the call center may not be true for Waymo.


Could you elaborate on how AGI is a requirement for self-driving?

By the way, I think the bulk of driving doesn't need L5; L4 would be largely enough...

About Waymo, I have an anecdote. Some years ago I went to a Faculty Summit at Google's HQ representing my university. As a part of the program, Larry Page talked with us for an hour or so, with no specific points to discuss; "ask me whatever you want," he said.

During the conversation, Larry recounted how he chose a PhD topic with his advisor, Terry Winograd. It went as follows: Terry offered Larry one of two projects:

- One of them was to develop self-driving cars;

- The other was to investigate the structure of the internet with the goal of improving search.

After agonizingly pondering the two projects for some days, he decided to take the second one, with the results that we all know.

But in the back of Larry's head the self-driving idea somehow stayed.

Years after Google made many millions in profit and started to diversify, the self-driving project came back to life under Waymo.

One interesting bit about the approach Larry wanted to take in self-driving is that he preferred to skip assisted driving altogether. That's why some early Waymo prototypes didn't have a steering wheel at all.

There you have the anecdote. Of course, it doesn't tell much about how self-driving will end. We all can agree that the difficulty of self-driving in real life was HUGELY underestimated.

This week I'll publish my take on where self-driving is heading as a Medium post.


Gary, I haven’t seen you comment on the Waymo/Swiss Re study and what it says about Waymo’s system?


This is my hypothesis: Prob(Developing quantum computers that can do something useful in our lifetime) ≥ Prob(String theory will ever be experimentally verified) = Prob(Developing level 5 autonomous cars) = 0 > Prob(Developing an AGI system) >> Prob(Developing an ASI system).

Nov 10, 2023 · edited Nov 10, 2023

"Remotely-assisted" driving? What...?? You mean someone 100+ miles away is helping to steer your car, like in a video game? And we're supposed to accept this new paradigm as a substitute for the advertised hype that is L5? This has to be a joke. Or at least a smoke-and-mirrors magic trick at the public's expense. Forget frustration; my emotions are turning to anger at the seemingly sheer dishonest propagandist skullduggery of it all...
