
'Snake oil' keeps ringing in my mind. Those who keep insisting that LLMs are a step closer to AGI are selling the same snake oil, in my opinion.

Generative AI, regardless of its utility, is a step backward in the search for AGI. It has almost nothing in common with intelligence as we observe it in humans and animals. AGI will not be found under this lamppost. We need new models, new theories.


There is no incremental path from LLMs to AGI that I can see. We need a new model of intelligence. But don't let me discourage you. I'm sure LLMs can be improved at whatever they do.


I wonder if Gary's too young to remember Spinvox, 20 years ago in the early mobile phone boom, which claimed to have technology that could transcribe voice recordings (e.g. voicemail messages) to text, burned through $100m of investor cash, and, it turned out, had never made it past the "secret call-centre-style companies of human transcribers" stage...

Plus ça change!


I was just going to mention this example too; see http://news.bbc.co.uk/2/hi/technology/8163511.stm. Spinvox, too, claimed that humans were merely helping in the early days of training the underlying tech.

Cruise, Theranos and Spinvox are perhaps further illustrations of wishful thinking pumped up by financial greed. Founders may actually start out with careful claims ("We will provide XY service, and during the first decade that service will be powered by humans while we investigate the potential of AI for this task.") and are then tempted - by investors and potential customers - to 'think big' and turn the claim upside down ("Our AI will do this task for you automatically, much more cheaply and efficiently than humans ever could, though in the initial stages we *might* rely on some human help to train our golden goose.").

I honestly am not trying to be cynical here, just observing a behaviour pattern in the startup world that I also experience, anecdotally. Very hard to resist the sirens' call...


Gary, I kind of get the sense that driverless vehicles are the "Turing Test" of the modern era. Only when we get a driverless car that does not run into things or kill anybody can we finally say the car has an intelligent driver indistinguishable from a human driver. It would be a more fun thought experiment if not for the possibility of me getting run over by one of these things lol.

Anyway, true autonomy, without any human intervention, may be further away than many companies have implied. If Cruise vehicles require frequent remote support even for test operations, full autonomy without humans may not be achievable in the near future.

"True autonomy" would not involve any human intervention or remote operator support during vehicle operation. The vehicles would be able to drive themselves without any human involvement, oversight or assistance. Could we agree on this definition?

The vehicles' artificial intelligence and capabilities would be sufficient that they do not require frequent "babysitting" or taking over driving duties by remote humans, as Cruise's vehicles seem to need according to the report.

The vehicles would be able to safely and reliably handle all driving situations and tasks without humans needing to co-pilot or effectively drive portions of routes. Any remote operator interventions would be very rare exceptions, not frequent occurrences.

Further, if we had arrived at true autonomy, companies would transparently disclose if and how much remote operator assistance is involved, rather than potentially obscuring technological limitations through human workarounds. Progress towards autonomy would be demonstrated through reduced reliance on humans over time, not a maintained status quo of remote operators behind the scenes. This would also be legally sound and would provide a way of auditing them.

Remote operators may effectively be "co-piloting" the vehicles and taking over driving duties in many situations. This calls into question whether the vehicles can drive themselves without human involvement.

The driverless car industry overall may be overstating progress made towards full autonomy without acknowledging the role of remote operators in keeping vehicles moving safely. We lack transparency about how much human support is actually needed. So far, to me it looks like a mirage: some vehicles appear to operate by themselves, but only if we ignore the heavy babysitting and corrections that programmers and remote operators have to provide.

Companies are using remote operators to mask technological limitations and create the illusion of more advanced capabilities than have actually been achieved. This could mislead customers, investors and regulators about the technology's true abilities. Not a good selling point, and quite frankly fraudulent, if I might be so bold.


True autonomous driving that is as good or better than humans in the full range of conditions will probably require something much closer to AGI. We do not have an inkling of how to get there, though many people have faith that we will. LLMs might be one of thousands more steps. The problem is the corner cases, the same ones where humans often fail, the unforeseen. You can solve 99.9% of the problem, but that's not nearly good enough. A system needs judgment which means understanding, which means something much closer to AGI.
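To put rough numbers on why 99.9% is not nearly good enough, here is a back-of-the-envelope sketch in Python; the exposure figures are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope: why "99.9% solved" still fails constantly at scale.
# All inputs are illustrative assumptions, not measured values.

solve_rate = 0.999                # assumed fraction of situations handled correctly
failure_rate = 1 - solve_rate     # 0.1% of situations mishandled

situations_per_mile = 10          # assumed safety-relevant decisions per mile
miles_per_year = 13_000           # rough US average annual mileage per vehicle

failures_per_year = failure_rate * situations_per_mile * miles_per_year
print(f"Mishandled situations per vehicle-year: {failures_per_year:,.0f}")
# ~130 per vehicle-year under these assumptions, i.e. a failure every few
# days per car, which is why the last 0.1% of corner cases dominates.
```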


As for the challenges of developing AGI:

AGI requires the ability to learn and adapt to new information.

AGI requires the ability to reason and make decisions in complex situations.

AGI requires the ability to understand and interact with the world in a meaningful way.

And, I suspect, AGI requires you to do all the above in tandem. That is quite a juggling act.

The challenge of corner cases is very real. Autonomous driving systems need to be able to handle a wide range of situations, including some that are very rare or unusual.

LLMs are a powerful tool, but they are not sufficient for achieving true autonomous driving. LLMs need to be combined with other technologies, such as computer vision and robotics, to create a truly autonomous system. Perhaps even enhanced by satellite technology, which would help the machine to increase the scope of monitoring for road conditions and threats beyond the small perimeter around the vehicle it is attempting to drive.

Some specific examples of corner cases that would be difficult for autonomous driving systems to handle, and which, I argue, MUST be satisfactorily resolved:

A child running into the street in front of a car

A car that suddenly swerves into another lane

A road that is blocked by debris or construction

When I consider the above examples, emotional responses certainly kick in for me. That kind of emotional thinking raises stress levels, which increases alertness to avoid the threat - something I do not think a machine is capable of producing. It is just a cold calculating machine that at best may be trained to recognize when a tiny human suddenly crosses its path, another car does something contrary to traffic rules, or some obstruction appears. But emotion sure helps us humans either prep for a potential disaster or quickly adjust when we are faced with one.


So I was thinking further about the emotional processing that goes into risk-aversion in driving. What do you remember most about drivers ed, if you took it in high school? For me it wasn't mostly the dry lectures on the technicalities of street signs and such, but the over-the-top gruesome films we watched: people flying through windows with their guts hanging out, and some sheriff saying "I never unbuckled a dead person at the scene of an accident!"

But of course this cannot work with AI. It is all hard-coded cold 1s and 0s, and hoping you have eliminated all the bugs that could lead the driving bot to cause an avoidable accident.

But for us that kind of emotional engagement is pretty important for most youths, because it drives home the point: "Hey, this is a machine you are operating, and it can easily get you and others killed if you misuse it." Especially at a time when young people are starting to drive and some are a little deluded into thinking they are mostly invincible.

A close encounter with an accident, or even a near-miss, can be a powerful motivator for self-correction in the future. The development of autonomous driving systems inevitably involves some risk of accidents. It is therefore important to define a threshold for acceptable risk before deploying these systems on public roads. This threshold must balance the potential benefits of autonomous driving against the potential costs of accidents.

So the question becomes: if it is possible for ADS to become fully autonomous, how many accidents are we willing to tolerate before we get there? On the other hand, if we expect it to eventually fail in achieving that ultimate goal, how many accidents are we willing to tolerate before we are forced to call it quits?


It seems Kyle Vogt confirmed the numbers from the NYT article. Cruise is so doomed, and if similar numbers apply to the other AV developers it would be a catastrophe for the industry, I think.

https://www.reddit.com/r/SelfDrivingCars/comments/17nyki2/kyle_vogt_clarifies_on_hacker_news_that_cruise/?rdt=33497


This too was incredibly surprising to me. And to many NY Times readers, based on the comments. At face value, it suggests the economics are still not there - at all - for these vehicles. The company will be under great pressure to disclose what these interventions are. To be their devil's advocate, it could be that the vehicles have an intentionally sensitive warning system to which the appropriate human override is usually to click “proceed as planned.” Indeed, this could be a plausible strategy for acquiring fresher and fresher training data, if the data from these interventions are incorporated in subsequent training cycles. If so, I do think the company should only include unassisted driving tests in its safety statistics.


I find it funny that you suggest this is the 'dark secret at the heart of the entire driverless car industry'.

In my book The Future Normal, I profiled Robert Flack, the CEO of Einride, an autonomous electric freight startup in Sweden. He's very vocal about how full autonomy is some way off, if ever achievable – and indeed they made a big play about hiring a remote truck operator who manages a 'pod' of multiple trucks, ready to take over to handle last mile complexity.

Claiming / aspiring to full autonomy was a choice, not a requirement. This is why we need diversity within the industry.


From the NYT story: Cruise's human teleoperators "frequently had to do something to remotely control a car after receiving a cellular signal that it was having problems." We need regulatory investigations and private litigation to unpack the details of this. "Dangerous when used as intended" is what trial lawyers dream of.


Well this explains why they can't work when phone networks are clogged. Thanks Gary; I skipped over Rod's tweet, but then came to your blog off of Patrick Lin's Facebook, which I only saw because the EU's DSA forced Meta to put back in recommenderless newsfeeds...


If this is true, Cruise may also have been under-reporting interventions to the DMV.

Which it *looked* like they were, just comparing stories in SF Chronicle to the DMV's statistics on driverless vehicle events. But possible that there is some wiggle room in reporting requirements that means this was superficially OK.


1.5 operations staff per vehicle is another excellent example of the idea that automation is frequently not so much about replacing human labor entirely as it is about reconfiguring the labor process on terms more favorable to capital.


The labor to capital ratio is going to have to get a lot better than that to justify their hype. If you count all their engineers, they are probably running much higher than 1.5 people per car, and may have a less efficient labor process than my local cab company.


That was useful, as it contains a Cruise reply. And it seems that purely for human assistance, each car needs about 2 minutes per hour. Given peak demand, that means the number of 1.5 per car seems a tad high and must include all staff (cleaning and so on). So Cruise might be able to deliver a successful service. It’s just not the ‘fully autonomous’ *self* driving car everybody has in mind when discussing this. I am reminded of Meta’s Cicero, which did well in blitz Diplomacy games and was impressive engineering/optimisation within a dead end.
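Taking the two figures in this thread at face value - about 2 minutes of remote assistance per vehicle-hour, and the reported 1.5 staff per vehicle - a quick sanity check (ideal scheduling assumed, so purely hypothetical) supports that reading:

```python
# Sanity check on the staffing figures quoted in this thread.
# Inputs are the claims as reported, not verified numbers.

assist_minutes_per_vehicle_hour = 2    # per the Cruise reply cited above
staff_per_vehicle_reported = 1.5       # figure from the NYT story

assist_fraction = assist_minutes_per_vehicle_hour / 60  # ~3.3% of each hour
vehicles_per_operator = 1 / assist_fraction             # ~30, if perfectly scheduled
remote_ops_per_vehicle = assist_fraction                # ~0.033 operators per car

print(f"Vehicles one operator could cover (ideal): {vehicles_per_operator:.0f}")
print(f"Implied remote-driving staff per vehicle:  {remote_ops_per_vehicle:.3f}")
print(f"Reported total staff per vehicle:          {staff_per_vehicle_reported}")
# The ~45x gap between 0.033 and 1.5 suggests the reported ratio covers peak
# demand plus all other operations staff (cleaning, maintenance, field
# support), not just remote driving assistance.
```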

The Cruise model might be something we will see more of. Humans as constant fallback for dumb automation. High-pressure jobs with split-second decisions, in the case of Cruise.


For those interested in the technology, strategy, and policy of self-driving cars, one of the best thinkers and writers is Brad Templeton. He has two recent fair-minded, balanced posts about the Cruise situation. More than technology, the most serious problem autonomous vehicle makers have nowadays is societal acceptance.

As information develops about the reported rate of human interventions, I would look to Templeton to boil down the facts and the implications.

His blog is called Brad Ideas.

https://4brad.com/


The most serious problem autonomous vehicle makers have nowadays is societal acceptance? Are you kidding? Society would accept them if they were good drivers. They are not. A good driver must have generalized intelligence and common sense. Autonomous cars have very little of that.


OMG, that name brings to mind rec.humor.funny - a pre-Web joke group :)


I am not really surprised that driverless cars need some human assistance. At the technology development or test stage it is quite normal. And it would be reasonable for remote assistance to be maintained through the full operational stage, even as the cars become more reliable.


You're missing the quantitative data; that's what is shocking.


It depends on the derivative: if they have x remote personnel per vehicle but that x is going down over time at a reasonable rate, they are in business.

Today's x doesn't matter much; if it's on a path to 0.1 in five years and 0.01 in ten years, it looks great.
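As a rough illustration (the 1.5 starting point is the reported figure; the five- and ten-year targets and the constant-rate decay are my assumptions):

```python
# Rough illustration of the "it depends on the derivative" argument.
# Assumes a constant annual rate of improvement, which is itself a big if.

x_today = 1.5          # reported staff per vehicle today
x_in_5_years = 0.1     # hypothetical five-year target
x_in_10_years = 0.01   # hypothetical ten-year target

# Constant annual multiplier needed to hit the five-year target:
annual_multiplier = (x_in_5_years / x_today) ** (1 / 5)             # ~0.58
print(f"Required reduction: {1 - annual_multiplier:.0%} per year")  # ~42%

# The same rate continued for ten years:
x_after_10 = x_today * annual_multiplier ** 10
print(f"Staff per vehicle after 10 years: {x_after_10:.3f}")        # ~0.007
# Hitting 0.1 in five years means cutting staffing needs ~42% every single
# year; whether the tech is actually on that trajectory is the open question.
```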


Any system that performs an information-into-function transformation, as driving does, needs a conscious observer to cope with changing, unpredictable circumstances and to follow traffic rules. That's a universal principle:

Meaning - Information - Function - Learning ->

Knowledge - Consciousness - Understanding - Memory ->

Intuition - Thinking - Sensing - Feeling ->

DNA - RNA - Protein - Signal cascades ->

Nucleus - Wave - Quantum - Interaction ->

Protons/Neutrons - Photons - Electrons - Chemical Bonds ->

https://docs.google.com/presentation/d/1VCjOHOSostUrtxieZvOjaWuTNCT59DMF/edit#slide=id.p1


Keep in mind, they primarily operate in SF, where the average commute distance in the city is just a few miles. In addition, what's the breakdown in intervention reasons? From what I've read, it is mostly to help the vehicles when they get stuck (the vehicle doesn't know what to do, so it's just at a standstill). While an inconvenience for others, that's not a safety issue (as opposed to, e.g., a human making sure the vehicle stops for a pedestrian). In short, the intervention rate needs to be contextualized.
