On self-driving cars... having autopilot in my car (not the full self-driving feature; I like driving too much to give it up to an algorithm) is nice, and I use it when I feel like taking it a little easier, but I never take my eyes off the road or my hands off the wheel. First, I don't trust the system as much as I trust myself (I'm a total car girl who drove a manual-transmission Honda Prelude for 18 years, and I especially love curvy mountain roads).
Second, and this is the key point in this discussion: paying attention to a car that's driving itself, whether on partial autopilot or FSD, is more stressful than doing the driving yourself. Why? Because part of your brain has to anticipate possible errors that could lead to accidents or just plain discomfort (for example, if the car misreads a shadow and brakes hard). It really is like teaching your teenager to drive. You know that they know the basics of driving, but it's the practice and real-world road experience they lack. With AI software, there is no "lived experience." It's just software, which is still not as good as your human eyes, still not as good as your human instinct, still not as good as your human driving experience.
Bottom line: if I'm driving, which at this point is second nature to me, it's a (mostly) enjoyable experience. If I have to watch someone (something) else drive and make sure they don't kill me or anyone else, that gets stressful and exhausting real fast.
There may of course be another explanation for why the CNET AI articles were barely touched by an editor: the editor deliberately left them as they were to try to kill the project.
Industrial safety didn't come without labor action. AI safety won't either.
Bird-brained pigeons are far more vigilant than people. A pretzel maker stationed people to flag mangled pretzels emerging from an extruder; they'd last about 20 minutes, whereas pigeons lasted all day. Air-sea rescue found the same thing: pigeons were far more reliable than people at spotting orange life vests in a vast sea of blue. Musk justified "full self-driving" on the grounds that it produces a lower rate of accidents than human drivers do, which is probably the case. So long as we hold up humans as paragons of virtue, humans will continue to be victimized by other humans.
It amazes me that anyone with an understanding of human nature would think that humans would hold onto a steering wheel and monitor the performance of a self-driving car. It's easy to see that as a task more difficult than actually driving, and far more dangerous. After all, not having to pay attention while getting from A to B is the main promise of self-driving cars.
Reading this post also reminds me of the automation in modern airliner cockpits. I'm a regular follower of Mentour Pilot, a popular YouTube channel (https://www.youtube.com/@MentourPilot) run by an actual 737 pilot. His videos examine how pilots interact with the automation and how mistakes occur. The autopilot takes care of the tedious details of flying the plane, but when it needs the pilot's attention, it spits out error messages and alerts. A pilot generally has a lot more time to react to problems than the driver of a car, but many airplane accidents are caused by pilots not reacting properly to those alerts.
It seems like this would be hopeless in a car. There's just no way a driver is going to be able to understand, fast enough to avoid trouble, why the car's autopilot needs them to take over. To understand the problem that quickly, the driver would have to duplicate all the cognitive tasks performed by the autopilot. In other words, the car's autopilot is useless unless it can do the whole job.
Excellent point. The 'time to failure' in a car on the highway is very short. But a 737 at altitude has time to correct.
Does this make sense?
Maybe 'time to failure' is a definable parameter that provides a boundary for applying an AI agent?
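Here's a minimal sketch of how such a boundary might be formalized. Everything in it (the names, the two-field situation record, the numeric thresholds) is a made-up illustration, not any real autonomy API:

```python
# Hypothetical sketch: gate an AI agent on estimated "time to failure" (TTF).
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Situation:
    ttf_seconds: float       # estimated time before inaction becomes unsafe
    takeover_seconds: float  # time a human needs to notice, orient, and act

def autonomy_permitted(s: Situation, margin_seconds: float = 2.0) -> bool:
    """Allow unsupervised automation only when a human could still
    step in comfortably before the estimated failure point."""
    return s.ttf_seconds > s.takeover_seconds + margin_seconds

# A 737 at cruise altitude: minutes of slack.
print(autonomy_permitted(Situation(ttf_seconds=120.0, takeover_seconds=10.0)))  # True
# A car on the highway: a second or two.
print(autonomy_permitted(Situation(ttf_seconds=1.5, takeover_seconds=2.5)))     # False
```

On that rule the cockpit passes and the highway fails, which matches the intuition in this thread.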
100% agree, and thanks for the link
Words to live - or die - by…
Thanks for another clear article on a very important subject. I, for one, am grateful that you continue to sound the alarm about the seriousness of the problems created by automation technologies. We are in dire need of published guidelines, and possibly new laws, to deal with them. Personally, I believe that all self-driving vehicles should be banned from public highways until such time as they can prove their reliability. I also believe that authors and other creators should be legally required to warn their audience that what they are reading (or observing) was wholly or partially generated by a machine.
I fully, FULLY support this type of disclaimer. I don't see how it should or could ever NOT be required. A part of me wants our pre-ChatGPT world back :P
Were you ever able to find the Fridman trash article? https://drive.google.com/file/d/1f5kmObWBt8ES3V6ST-ZnDDyRwQHB-Bja/view?usp=sharing
Driving-assist tech should vigilantly monitor the vigilance of the driver it is assisting. That's fairly easily done with a driver-facing camera watching the direction of the driver's gaze, and it has been in production for several years in some of the better-designed (German) cars.
yes, that should absolutely be a requirement!
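For what it's worth, here is a minimal sketch of the kind of vigilance check being described, assuming an upstream gaze-estimation step (not shown) already labels each camera frame as eyes-on-road or not; the threshold and names are invented for illustration:

```python
# Hypothetical driver-vigilance monitor. Assumes a driver-facing camera
# pipeline that yields (timestamp, eyes_on_road) samples; the gaze
# estimation itself is not shown, and the 2-second limit is an assumption.

EYES_OFF_LIMIT_S = 2.0  # sustained off-road gaze before escalating

def monitor(gaze_samples):
    """Yield alert events when the driver's gaze leaves the road too long."""
    off_since = None
    for t, on_road in gaze_samples:
        if on_road:
            off_since = None          # gaze returned; reset the timer
        elif off_since is None:
            off_since = t             # gaze just left the road
        elif t - off_since > EYES_OFF_LIMIT_S:
            yield (t, "ALERT: driver attention lapse")

# Example: the driver looks away from t=1.0 until t=4.0.
samples = [(0.0, True), (1.0, False), (2.0, False), (3.5, False), (4.0, True)]
for event in monitor(samples):
    print(event)  # (3.5, 'ALERT: driver attention lapse')
```

A real system would debounce blinks and escalate through chimes, vibration, and disengagement, but the core loop really is about this simple.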
Is the way AI is being developed and released really OK?
After two of Boeing's new 737 MAX jets fell out of the sky, killing everyone aboard both planes, the entire worldwide fleet of 737 MAX jets was grounded. And they stayed grounded until the problem was identified, fixed, and thoroughly tested to ensure the planes were safe for humans to use.
But the tech/AI world seems to believe it is OK to release, advocate for, and use tech that has already been shown to give not only wrong responses and actions but, in some cases, dangerous ones. When real, live humans are used as crash test dummies for tech, things can go, and have gone, horribly wrong. Yet this seems to be OK to many in the tech world.
And it gets worse: as you pointed out, the stakes are higher for AI in legal matters, where laws are complex and not based on pure logic or on any defined, predictable rational or structural patterns for AI to learn from.
Health, though, is an entirely different world, since it is regarded as both an art and a science. There are places where AI will excel at the science and be invaluable in health applications. There are other places, in the art of health, where AI as currently designed would be profoundly dangerous. Unfortunately for us real humans, there is no clear line, or in many cases no line we can see, where the science of health ends and the art of health (mental and social well-being) begins or overlaps.
As things stand, there will be no way for humans using AI to know whether it is safe until it is too late. Is this OK?
Has anyone done a psychological/societal study of the impact of technologies that erode human agency? The Attention Problem is testament to the fact that human beings need to remain engaged, lest we turn ourselves into mindless zombies. I love the act of driving; it's one of the few freedoms I feel I have left in an increasingly marshalled world. The larger question for me is not whether we can trust a technology; it's whether we truly need it.
For driving, the government should centralize all accident data, both externally and internally captured, and synthesize many related perturbations of each accident (speed, vehicles, lighting, weather, slope …), as well as capture and synthesize normal driving. No autonomous-driving software could be fielded without passing this growing test set at a high rate (a rough sketch of the idea follows below).
It would be funded by a vehicle tax that scales with the accident rate of that vehicle.
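To make the testing half of this concrete, here is a rough sketch under stated assumptions: scenarios as simple records, invented perturbation ranges, and a simulator hook (assumed, not shown) that replays a scenario against the software under test:

```python
# Hypothetical certification harness: replay recorded accidents under many
# synthesized perturbations and demand a high pass rate before fielding.
# The scenario format, ranges, and simulator hook are all assumptions.

import random

def perturb(scenario: dict, rng: random.Random) -> dict:
    """Return a variant of a recorded scenario with jittered conditions."""
    return {
        **scenario,
        "speed_mph": scenario["speed_mph"] + rng.uniform(-10, 10),
        "lighting": rng.choice(["day", "dusk", "night"]),
        "weather": rng.choice(["clear", "rain", "snow", "fog"]),
        "slope_pct": scenario.get("slope_pct", 0) + rng.uniform(-3, 3),
    }

def certify(passes_in_sim, scenarios, variants_per=1000, required=0.999):
    """passes_in_sim(scenario) -> bool is the assumed simulator hook."""
    rng = random.Random(0)  # fixed seed keeps the test set reproducible
    trials = [perturb(s, rng) for s in scenarios for _ in range(variants_per)]
    rate = sum(passes_in_sim(t) for t in trials) / len(trials)
    return rate >= required, rate
```

The interesting policy knobs are `variants_per` and `required`, plus the fact that the test set grows with every newly recorded accident.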
This is fascinating to me: the idea that we can't be competent editors if we aren't competent creators, and that by relying on technology to create, we lose the ability to create, and with it the ability to edit what the technology produces. It's a very insidious cost of rapidly advancing technology, and one that subverts the goals of capitalism if reckoned with. But of course it won't be. Here is a somewhat related article about GPS and sailing that I think about a lot: https://www.nytimes.com/2016/03/20/magazine/the-secrets-of-the-wave-pilots.html
There are a lot of examples of signs, menus, etc. being printed with text translated into a foreign language, but containing that language's equivalent of "translator server error" or "I am out of the office this week and will respond to your translation request when I return." That's a sign of the same sort of complacency around computer-generated text.
see: https://languagelog.ldc.upenn.edu/nll/?p=787 & https://languagelog.ldc.upenn.edu/nll/?p=11907
The best argument for driverless cars may be what incredibly bad drivers humans so often are. The next time you're on the highway, take note of how many drivers are tailgating at 70 mph, as if the interstate were a NASCAR race. I doubt driverless cars will ever be reliable, but as compared to what?
Perhaps. But I would suggest that experience is a much better argument for more public transit.
A related thing when it comes to vigilance/automation is the "Ironies of Automation" paper, actually from 1983 but very relevant and insightful; see e.g. https://blog.acolyer.org/2020/01/08/ironies-of-automation/. It shares a certain bleakness of outlook with this post.
If our attention always 'wanders', then how can we drive for several hours? Or are we misjudging our ability to maintain focus?
What if an important AI application is to "expand" our attention - but then we have to define the limits?
My GMC flashes a warning in a heads-up display if I'm approaching something too fast. It vibrates if I might back into something. It will help me stay "in lane" on the highway, although I don't use that feature; it's too unreliable (it seems to depend on the highway markings). Some cars now will apply the brakes before hitting something.
But in every case the driver is in control.
Linking "Complacency and overtrust." to this phenomenon is a really good idea. It drives home the point that AI has its limits - just like people.
Bottom line - I'm not sure how anyone could trust an AI appliance that was trained in the cesspool of the internet.