Tesla’s new Autopilot features, which offer basic autonomous driving that can change lanes, follow other cars and react to trouble, rolled out to drivers this week...
Humans being idiots? I can't imagine that.

Edit: In seriousness, Tesla has hit upon the fundamental problem: there can't be an intermediary step between fully human-controlled and fully automated when it comes to passenger cars. Basic autopilot works in planes because pilots have extensive training, and there is constant monitoring and communication between the pilots and various air traffic controls - everyone knows, or can know (mostly), where everyone else is in their local sky area. Neither of these things is true of cars. As a result, any measure to "bridge the gap" between human and machine is going to fail.

Leave emergency measures to the machine? Safer, but it will pull control when drivers "feel" they are still in control, and no one will use it. People will cry nanny state and robot apocalypse.

Leave emergency measures to the driver? Expect an increase in accidents, because any driver relying on Autopilot is probably focusing on something else, and if you ask someone who is not focused on the road to suddenly take the wheel in an emergency, it's gonna be a bad time.

Full automation, or full human autonomy, is what I'm saying.
Straight from my thesis: it takes at least four seconds for a person to take over the wheel, at least eight seconds before the driver can respond adequately, and around thirty seconds before the driver has [regained] the same precision as a regular driver.

Yes, it's possible. It is needed, for training purposes. Google takes all the training upon itself, which is extremely expensive, as you need to do thousands of miles, all of them monitored by a driver, before you hit a good enough level. Using beta testers makes sense from a technical and business point of view - but it's quite risky from a human factors engineering standpoint.
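To put those handoff numbers in perspective, here is a minimal sketch. The 4/8/30-second figures are the ones quoted above; the function, the speed, and the distance in the example are made up for illustration, not anyone's actual model:

```python
# Hypothetical illustration of the takeover-time problem described above.
# The reaction figures (4 s to get hands on the wheel, 8 s to respond
# adequately, 30 s to regain normal precision) are taken from the comment;
# the driving scenario at the bottom is invented.

HANDS_ON_WHEEL_S = 4.0      # time until the driver physically takes over
ADEQUATE_RESPONSE_S = 8.0   # time until the driver can respond adequately
FULL_PRECISION_S = 30.0     # time until driving precision is back to normal

def takeover_outcome(time_to_hazard_s: float) -> str:
    """Classify how a distracted driver fares if the car hands back
    control time_to_hazard_s seconds before a hazard."""
    if time_to_hazard_s < HANDS_ON_WHEEL_S:
        return "driver never even gets hands on the wheel"
    if time_to_hazard_s < ADEQUATE_RESPONSE_S:
        return "hands on the wheel, but no adequate response yet"
    if time_to_hazard_s < FULL_PRECISION_S:
        return "responding, but with degraded precision"
    return "driver fully back in the loop"

# At 70 mph (~31 m/s), a hazard 150 m ahead gives you under five seconds:
print(takeover_outcome(150 / 31.3))  # "hands on the wheel, but no adequate response yet"
```

Even a fairly generous gap at highway speed barely clears the first threshold, which is exactly the gap the comment above says is hard to bridge.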
Google said that it found the same in its testing: humans couldn't be trusted not to trust the car too much. That was their reasoning behind removing the steering wheel (and the brake and gas pedals) in their auto-cars. Somewhat ironically, CA regulators worried that this makes them less safe. My guess is that Google is right, and that Tesla et al. (Tesla's halfway approach is the same one other manufacturers are taking) are wrong.
You've hit the nail on the head here. We saw the same thing happen with planes when autopilot and other automated features were first introduced. There is an excellent podcast about the phenomenon, and how some carmakers are handling it, here. Highly recommended. http://www.npr.org/sections/money/2015/07/29/427467598/episode-642-the-big-red-button
Basic autopilot works in planes because pilots have extensive training, and there is constant monitoring and communication between the pilots and various air traffic controls - everyone knows, or can know (mostly), where everyone else is in their local sky area.

Well, even airline automation has its challenges.
This reminds me of a very old joke/hoax that went around some parts of Europe regarding frivolous lawsuits and human stupidity. The owner of a recently purchased RV set the cruise control while on the highway and went to the back of the RV to pick something up, leaving the cruise control in charge. Of course he crashed, and then proceeded to sue the manufacturer of the RV for false advertising and damages. Here we are, 30 years later, with reality surpassing fiction.
I can see why this happened. From a marketing standpoint, "hey wow autonomous cars for free for our existing customer base! Whoopdydoo climb TSLA climb!" From a liability standpoint, "hey, we just told your car to drive itself, now read this longlistofdisclaimersheywaitwhereareyougoing-" Elon Musk does not listen to lawyers.
The thing that gets me is that every obviously-AI application I've ever written has taken months to get users to trust it. I try very hard never to say the "a" word when I write one now, because I know I'm in for months of "explain this" and "what would it do in this bizarre hypothetical that will never happen and/or cannot possibly happen" if I do. Driving your car, on the other hand? Look ma, no hands! Obviously I need to work on more applications that can kill you.
Watching the video, it looks like there's more to this than the driver being an idiot. The car appears to turn sharply left toward an oncoming car. So while it's undisputed the driver should have had his hands on the wheel, why the car veered toward oncoming traffic at all is still concerning. What would the car have done if his hands were on the wheel? The inevitable comparison is cruise control. When I use cruise control, I'm expected to still be in control of my speed because conditions may change such that my set speed is no longer appropriate. With that analogy and the car set to stay in the lane, the driver expects the car to stay in the lane but may need to take over when such a simplified command is no longer sufficient. The car veering toward oncoming traffic seems analogous to cruise control speeding up uncontrollably.
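To make the cruise-control analogy concrete: lane-keeping of this kind is usually a feedback loop that steers toward an estimated lane center. The sketch below is a generic, made-up proportional controller (not Tesla's code; the gain and limits are invented) showing how a briefly wrong lane estimate turns directly into a sharp steering command:

```python
# Toy lane-centering controller - a generic proportional sketch, not
# Tesla's implementation. Gain and steering limit are invented values.

def steering_command(offset_from_center_m: float,
                     gain_deg_per_m: float = 3.0,
                     max_steer_deg: float = 15.0) -> float:
    """Steer toward the estimated lane center.
    Positive offset = car is right of the estimated center, so steer left."""
    cmd = gain_deg_per_m * offset_from_center_m
    return max(-max_steer_deg, min(max_steer_deg, cmd))

# Normal case: 0.2 m right of center -> a gentle 0.6 degree left correction.
print(steering_command(0.2))

# Failure case: the vision system briefly picks the wrong marking and puts
# the lane center 2.5 m to the left -> a hard 7.5 degree left command, and
# nothing in this loop knows that oncoming traffic is over there.
print(steering_command(2.5))
```

Nothing in that loop knows whether the commanded correction points at oncoming traffic; that judgment is exactly what the driver is still expected to supply, which is why the cruise-control comparison holds.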
Do you know what the beeping was in the video? Was it a simple "hands off the wheel too long" alarm or was it indicating autopilot was lost (i.e. your number 3)?
From looking at other videos, it seems to be the alarm to tell the user to take back control. I'm wondering whether it tried to follow the approaching car for a millisecond, immediately realized its mistake and decided to give full control back - at the worst moment imaginable.
So while it's undisputed the driver should have had his hands on the wheel, why the car veered toward oncoming traffic at all is still concerning.

I'm willing to bet whatever Tesla's telemetry says about that road, it's less than you or I can see. Also probably wrong.

What would the car have done if his hands were on the wheel?

Probably nothing. That's a pretty simple force-feedback control.
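For what it's worth, the "force-feedback" point is how these systems commonly infer whether hands are on the wheel: the driver's torque on the steering column is measured, and a hands-off timer drives the take-over alarm. The sketch below is generic logic with invented threshold and timeout values, not Tesla's firmware:

```python
# Generic hands-on-wheel / take-over-alarm logic - a sketch of the kind of
# monitoring discussed above, not Tesla's actual firmware. The threshold
# and timeout values are invented for illustration.

TORQUE_THRESHOLD_NM = 0.4   # steering torque implying a hand is on the wheel
HANDS_OFF_TIMEOUT_S = 10.0  # how long hands-off is tolerated before alarming

class WheelMonitor:
    def __init__(self) -> None:
        self.hands_off_time_s = 0.0

    def update(self, driver_torque_nm: float, dt_s: float) -> str:
        """Return the system's state after one control tick."""
        if abs(driver_torque_nm) >= TORQUE_THRESHOLD_NM:
            self.hands_off_time_s = 0.0
            return "hands detected"
        self.hands_off_time_s += dt_s
        if self.hands_off_time_s >= HANDS_OFF_TIMEOUT_S:
            return "alarm: take over now"   # the beeping discussed above
        return "hands off, counting"

monitor = WheelMonitor()
for _ in range(110):                       # 11 seconds with no driver torque
    state = monitor.update(driver_torque_nm=0.0, dt_s=0.1)
print(state)                               # -> alarm: take over now
```

With a hand already applying torque, that same signal is what lets the driver override the steering instantly, which is roughly what "probably nothing" means above.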