kingmudsy  ·  3319 days ago  ·  post: Why Self Driving Cars Must Be Programmed to Kill

My thoughts:

I wonder if you'd be able to allow the user to set a parameter that defines this behavior? Obviously you can't account for the innumerable scenarios that the car is going to face, but you could still decide the degree of self-sacrifice you're willing to make. The car can be self-driving, but it seems like this would be a good way to preserve the autonomy of the user through the automation.
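
A toy sketch of what I mean - every name and number here is made up, just to show how a planner could weigh a user-set preference against risk:

```python
# Toy sketch - all names and numbers invented; no real AV exposes anything like this.
from dataclasses import dataclass

@dataclass
class EthicsPreference:
    self_sacrifice: float = 0.5  # 0.0 = protect occupants, 1.0 = protect others

def maneuver_cost(risk_to_occupants: float, risk_to_others: float,
                  pref: EthicsPreference) -> float:
    # Lower cost = preferred maneuver; the user's setting shifts the weighting.
    w = pref.self_sacrifice
    return (1 - w) * risk_to_occupants + w * risk_to_others

# (risk to occupants, risk to others) for two hypothetical maneuvers:
candidates = {"brake": (0.2, 0.3), "swerve": (0.9, 0.0)}

selfish = EthicsPreference(self_sacrifice=0.1)
altruist = EthicsPreference(self_sacrifice=0.8)
for pref in (selfish, altruist):
    best = min(candidates, key=lambda m: maneuver_cost(*candidates[m], pref))
    print(pref.self_sacrifice, best)  # 0.1 -> brake, 0.8 -> swerve
```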

veen  ·  3319 days ago

Setting aside that I don't think a car will ever have such a slider, giving people the option to set a slider from "selfish" to "Gandhi" doesn't seem like a great idea. Why would you give a user, ignorant of the actual algorithms used by an extremely complex device, control over something like that other than to make them feel better?

I don't think this is as big of an issue as these researchers are making it out to be. I have this post saved for a reason. What this issue boils down to is that the real world doesn't work like the philosophical, moral world posited by articles like these. They never ask real-world questions, like how road accidents actually occur. Ever heard of the Swiss Cheese Model by James Reason?

The idea is that bad actions and errors will happen, but that there are layers of defence that prevent them from turning into accidents most of the time. Say, for example, a driver is inattentive and oversteers on a highway. Will it turn into an accident? Likely not - highways are very wide, so over- or understeering is not a big deal. When you do cross the road markings, you can often hear the rumble from the tires, which lets inattentive drivers know they're crossing the line. Even if they go over the line by quite a bit, the driver can still jerk the steering wheel back (the last safety net, the final defence mechanism).

If an accident happens, it means that multiple things have gone wrong at the same time - an arrow can be drawn straight through the 'holes' in every slice of cheese. Only when all of the big safety nets fail does an accident occur.
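
To put rough numbers on it (the per-layer failure rates here are invented, purely to illustrate the multiplication):

```python
# Invented per-layer failure rates, just to show how the layers multiply out.
layers = {
    "attentive driver":  0.05,  # driver misses the drift
    "wide lane margins": 0.10,  # margin isn't enough this time
    "rumble strips":     0.20,  # warning goes unheard
    "corrective jerk":   0.02,  # driver fails to steer back
}

p_accident = 1.0
for p_fail in layers.values():
    p_accident *= p_fail

print(f"{p_accident:.0e}")  # 2e-05: all four holes have to line up at once
```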

There are two major categories of 'things that can go wrong' here: human errors and unsafe actions. Autonomous vehicles will be better at both, as they can greatly reduce the number of errors and are increasingly good at preventing unsafe actions. For an autonomous vehicle to face a 'kill 1 passenger vs. kill 10 others' decision, it must fail to see the dangerous situation well in advance, it must have literally no room to divert to, the danger must arise on a road where it can happen under the speed limit, and those 10 others must be standing in the only place the car can steer its momentum toward. In the real world, that means one of two things: either the car is buggy or broken (computer error), or those ten people are doing something extremely dumb (unsafe action). In the former scenario Google will pay; in the latter, Google has gigabytes of sensor data to prove how dumb they were.

Look - the goal of autonomous vehicles is to drive the same as a human driver, but safer. How often does a moral dilemma like the trolley problem happen right now? How often have you made that decision, or pondered it during an unsafe situation? Never, or almost never. Just because actions are written down in code doesn't mean ethics need to be coded as well.

user-inactivated  ·  3319 days ago

    Setting aside that I don't think a car will ever have such a slider, giving people the option to set a slider from "selfish" to "Gandhi" doesn't seem like a great idea. Why would you give a user, ignorant of the actual algorithms used by an extremely complex device, control over something like that other than to make them feel better?

How would you feel about a dummy slider? One that lets people think they're in control of the car's behavior, but in fact does nothing?
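
Something like this, I mean - a toy sketch where the value is stored and shown back to the user, but the driving logic never reads it:

```python
# Placebo-slider sketch (illustrative only): the setting is stored and
# displayed back to the user, but nothing downstream ever consults it.
class PlaceboEthicsSlider:
    def __init__(self) -> None:
        self._value = 0.5

    def set(self, value: float) -> None:
        self._value = max(0.0, min(1.0, value))  # clamp and store... that's all

    def get(self) -> float:
        return self._value  # only ever used to redraw the UI
```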

Drylandfish  ·  3319 days ago

I understand Volkswagen has a job offer for you.

user-inactivated  ·  3319 days ago

It's not a new idea.

veen  ·  3319 days ago

I have that on my motherboard. A slider from 'power-saving' to 'balanced' to 'performance'. I don't notice a damn difference. While that's not such a big deal for motherboards, it's downright insulting when it comes to ethical issues. So you're giving me an option that might influence whether I live or not, and you don't even take it into account?

user-inactivated  ·  3319 days ago

I dunno man. To be honest with you, this whole concept just feels so above me I don't know how to even begin thinking about it.

veen  ·  3319 days ago

That's part of the problem - an engineering problem gets lifted into a philosophical issue, where it doesn't really make sense. It's like paying your Uber driver more to kill others instead of sacrificing yourself. Theoretically that might be possible, but the real world doesn't work that way.