Rethinking “driverless cars”
Essentially every conversation about “driverless cars” over the last decade has to be rethought — with important implications as well for “AGI timelines”.
We used to think that there was a fundamental two-way distinction between self-driving cars that truly drove on their own (Level 5) and cars in which a human driver sat in the driver’s seat, constantly overseeing what was happening (driver assist).
Turns out Cruise and (per rumors) others have really been working on what we should call “remotely-assisted” driving: the car does *some* of the work, but frequently calls a call center (aka a remote operations center). That’s basically just driver assist, but with the assistant outside the car rather than inside.
This is not true autonomy, but semi-autonomy. It might (I doubt it) save money, but it has literally nothing to do with AGI.
So there is a three-way distinction:
Self-driving
Driver-assist
Remotely-assisted driving
It appears that most or all of the work at present is much closer to the latter two, and that all of the published numbers are really about the latter two, with no disclosure of how much remote centers are contributing to whatever results we see for efforts at putative self-driving.
I don’t know if anyone has EVER published clean, detailed, remote-assist-free data on self-driving, because regulators were not demanding information on remote assist. There is no way whatsoever to assess the true import for public safety without such data. The whole thing is a farce without it.
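As an illustration of why that disclosure matters, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how omitting remote interventions inflates the headline “miles per intervention” statistic:

```python
# Hypothetical illustration: how omitting remote assistance inflates
# apparent autonomy. All numbers below are made up for the example.

total_miles = 1_000_000          # miles driven by the fleet
in_car_interventions = 50        # safety-driver takeovers (usually disclosed)
remote_interventions = 2_500     # remote-ops "assists" (usually NOT disclosed)

# The statistic typically reported: miles per disclosed intervention.
reported = total_miles / in_car_interventions

# The statistic that would reflect true autonomy: count every time a
# human, anywhere, had to step in.
actual = total_miles / (in_car_interventions + remote_interventions)

print(f"Reported miles per intervention: {reported:,.0f}")  # 20,000
print(f"Actual miles per intervention:   {actual:,.0f}")    # ~392
```

On these made-up numbers, the reported figure looks roughly fifty times better than the honest one; that is the scale of distortion that undisclosed remote assistance makes possible.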
Some things that follow
If all the “self-driving” efforts are leaning heavily (e.g., hourly) on call centers, true self-driving is a LONG way away. Anyone who thinks we are close to true Level 5 self-driving has probably been deceived.
Essentially NONE of the comparisons we have heard between humans and machines have looked at cars in which humans were not playing some role in the driving, either in the car or remotely. And until yesterday we were all in the dark about the magnitude of the human contribution. Waymo is probably better, but how much better is anybody’s guess.
Quite possibly nobody will turn a profit in remote driving anytime soon; quite possibly everybody is spending heavily on remote-assist centers, in hopes of achieving true autonomy. Cruise (the only place for which we have any relevant data so far) has more support staff than vehicles; for now, anyway, they aren’t saving money, they are burning it.
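To see why a staff-to-vehicle ratio above one is so hard to square with profitability, here is a back-of-the-envelope sketch in Python; every figure is an assumption for illustration, not published Cruise data:

```python
# Back-of-the-envelope economics of remote assistance.
# Every figure here is a hypothetical assumption, not Cruise data.

vehicles = 400                     # fleet size (assumed)
staff_per_vehicle = 1.5            # support staff per vehicle (assumed; >1, per the article)
annual_cost_per_staffer = 75_000   # salary + overhead per staffer (assumed, USD)
annual_cost_human_driver = 65_000  # what a conventional driver would cost (assumed, USD)

support_cost = vehicles * staff_per_vehicle * annual_cost_per_staffer
driver_cost = vehicles * annual_cost_human_driver

print(f"Remote-support cost for fleet: ${support_cost:,.0f}")  # $45,000,000
print(f"Equivalent human-driver cost:  ${driver_cost:,.0f}")   # $26,000,000
```

On these assumptions, the remote-support bill comes to $45M a year versus $26M for conventional drivers; remote assist only pencils out if the staff-to-vehicle ratio falls well below one.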
Remote driving itself is unlikely to work well enough as an endgame on public roads, because data communications just aren’t stable enough (latency and loss of communications are both serious problems). It’s too slow to work on highways, where latency needs to be really low, and it can contribute to congestion failures in the city, as we saw with Cruise.
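A quick worked example (Python; the latency figures are illustrative assumptions) shows why latency is prohibitive at highway speed: at 70 mph a car covers about 31 meters every second, so the vehicle travels “blind” for the full round trip to the remote operator:

```python
# How far a car travels while waiting on a remote operator.
# Latency figures below are illustrative assumptions, not measurements.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

speed_mph = 70
speed_ms = speed_mph * MPH_TO_MS  # ~31.3 m/s

# Round-trip latencies (seconds): healthy cellular link vs. degraded cases.
scenarios = [
    ("good 4G/5G link", 0.10),
    ("congested link", 0.50),
    ("dropout + recovery", 2.00),
]

for label, rtt in scenarios:
    blind_distance = speed_ms * rtt
    print(f"{label:>20}: {rtt * 1000:>5.0f} ms -> {blind_distance:5.1f} m traveled blind")
```

And that is before adding the remote operator’s own perception and reaction time, which can add a second or more on top.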
Uber and Lyft can stop worrying about being disintermediated by machines; they will still need human drivers for quite some time.
The whole self-driving car industry is thus likely to be viewed in a few years as an epic fail, crushed by outliers and AI that could not reason adequately.
Driver-assist (à la Tesla) with humans in the loop may well stay, occasionally causing fatal accidents given its lack of reliability, but the idea of your car moonlighting as a self-driving taxi is not happening this decade, maybe not even next.
We may see similar disappointments in safety-critical applications of LLMs (e.g., medicine): years of promise, but with humans required in the loop for a very long time, and true autonomy difficult to achieve.
Transformers (the key technology behind LLMs) won’t get us to Level 5 self-driving; hope springs eternal, but the fact is that LLMs hallucinate a lot. High reliability is not their forte, and neither are outliers. They won’t take humans out of the loop.
If true self-driving is this hard, despite the amount of data that has been collected and the amount of money invested ($100B+), any dreams of AGI happening soon are likely wildly unrealistic.
No fantasy about AGI thus far has survived contact with the real world.
Gary Marcus has been comparing natural and artificial intelligence his whole life, pretty much since he was a child. Humans have a lot of problems; machines still do, too.
I agree with every point you made above. The whole thing is indeed a farce. Also, all the talk about progress being made toward solving AGI that we hear coming from generative AI experts and AI executives is pure snake oil.
Deep learning (generative AI) will never get us closer to AGI, regardless of its undeniable usefulness in some applications. It is a giant step backward for humanity's quest for AGI, in my opinion. It sent everyone chasing after a red herring, and it sucked all the funding out of research efforts on alternative AI models.
AI research should be focused primarily on systematic generalization. Without fully generalized perception, the corner-case problem that plagues both self-driving systems and LLMs cannot be solved. Deep learning is based on objective-function optimization, which is the opposite of generalization. It is useless for AGI.
The deep learning era will soon come to an end. It's time to look under a different lamppost.
Remote-assisted driving could be a nightmare. I recently had solar panels fitted, and the usage statistics are handled remotely, when the internet and the remote computer are not overloaded. Imagine driving at speed along a motorway and losing all communications (it might even be a major solar flare disrupting satellite coverage). Any safe system must be self-contained.