Could Cruise be the Theranos of AI? And is there a dark secret at the core of the entire driverless car industry?
We don’t know, but here are some questions regulators should ask
Major, major scoop in The New York Times last night, by Tripp Mickle, Cade Metz, and Yiwen Wu, about Cruise, a driverless car company that GM bought for what was rumored to be a billion dollars in 2016.
Buried halfway into the article was a stunning revelation. Roboticist Rodney Brooks was perhaps the first to note its significance:
Certainly no commercial product could ever operate at a profit if it needed remote operators anything like that often. As Brooks points out, the term “autonomous” barely applies. We all knew that remote operators existed (Cade Metz describes this in Episode 3 of my podcast Humans versus Machines, for example), but I personally had no idea that remote operators were involved this frequently; for me it casts a radically different light on what’s going on, if the cars need to be babied that much.
Beyond what Brooks pointed out, the story also notes that “Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle.” Cruise has been presenting its vehicles as autonomous, in hopes that they someday would be, but right now it looks like they are anything but.
Fitting with this general vibe, a source (who, in fairness, I don’t know well) just told me that his impression, having visited with them not so long ago, was that “they're definitely relying on remote interventions to create an illusion of stronger AI than they really have”.
I have to be honest, all this has me thinking about Theranos.
Elizabeth Holmes wasn’t necessarily wrong to think that someday humanity will figure out a way to do medical tests quickly and automatically from a few drops of blood, but in hindsight she didn’t really have a clue about how to make it happen on any clear schedule, even though she pretended she did. Maybe what she aspired to was theoretically possible, but what she was delivering wasn’t what she said it was, and she didn’t know how to deliver what she promised. Both customers and investors were fooled.
§
Cruise may or may not turn out to be the same kind of story, with the same kind of collateral damage. With a very important asterisk I am about to explain, if Cruise’s vehicles really need an intervention every few miles, and 1.5 external operators for every vehicle, they don’t seem to be even remotely close to what they have been alleging to the public. Shareholders will certainly sue, and if it’s as bad as it looks, I doubt that GM will continue the project, which was recently suspended.
Here’s the asterisk: the New York Times presumably wouldn’t have run those numbers if they weren’t confident in them (after all, the legal liability would be enormous), but we don’t yet know exactly what those numbers refer to.
It is at least possible that the numbers aren’t as bad as they look. Maybe Cruise is just being super fussy and super careful, and the interventions are for minor things that don’t really matter. How much of that is direct driving? What else is going on when they intervene? Maybe (though I am skeptical) the cars would be fine in the real world on their own. We really need to know what the teleoperators are doing, and why, before passing judgment. But it’s starting to feel like if the cars were actually left to their own devices, all hell would break loose.
And, now that the issue has been raised, well, I realize there’s been almost zero public disclosure of how any of the driverless car companies are using remote operators. It’s not just Cruise. We don’t have any idea how much any company is relying on these operators, what they are doing, or how well the cars would perform without these human crutches. As safety expert Missy Cummings said to me this morning, remote operators could well be “the dark secret of ALL self-driving.”
Human lives are at stake.
The State of California should demand answers immediately, and share them with the public.
Update: A few hours after I posted this Cruise CEO Kyle Vogt essentially confirmed that their “driverless” cars need very regular human intervention:
Gary Marcus has been expressing skepticism about driverless cars since 2016, and wrote about them, and other challenges in building trustworthy AI, in his 2019 book Rebooting AI, with Ernest Davis.
'Snake oil' keeps ringing in my mind. Those who keep insisting that LLMs are a step closer to AGI are selling the same snake oil, in my opinion.
Generative AI, regardless of its utility, is a step backward in the search for AGI. It has almost nothing in common with intelligence as we observe it in humans and animals. AGI will not be found under this lamppost. We need new models, new theories.
I wonder if Gary's too young to remember Spinvox, 20 years ago in the early mobile phone boom, who claimed to have technology that could transcribe voice recordings (eg voicemail messages) to text, and burned through $100m of investor cash, and, it turned out, had never made it past the "secret call-centre-style companies of human transcribers" stage...

Plus ça change!