Just over two years ago, I had a few discussions on here about self-driving cars, which led to me writing a blog post about Google's car, which in turn led to an article of mine getting published on a Dutch platform called The Correspondent. (It's basically Pando, but better.)
I'd say writing that article is what solidified my interest in the subject. Since then, I've written my bachelor's thesis on it, have been invited to a few meetups, and have started my master's degree specifically at the university with the best experts in the field. Next year I'll likely do my master's thesis with one of those experts.
This summer I decided to spend my free time researching one of the aspects of self-driving cars that I found difficult to grasp: machine ethics. Much inspired by kleinbl00's rant, I wanted to delve deeper into the topic and explore just why the trolley problem and the issue of machine ethics for self-driving cars kept coming up.
De Correspondent - "Does your car need a conscience?".
It's in Dutch, but a proper English translation is in the works. Google Translate does an okay job, even though the result reads like it was written by a caveman.
________________________
My argument roughly goes as follows. First, I present an example of the kind of dilemma you so often find in articles about self-driving cars (autonomous vehicles, or AVs) and ethics. An AV is driving down a twisty mountain road. A girl crosses the street, trips, and falls over, just as the car comes around a blind corner. The car can save the girl, but only by driving you, its passenger, off the cliff to certain death.
Hypothetical dilemmas like this are fundamentally misleading. Traffic doesn't work like that. For one, difficult traffic scenarios are far more complex and involve many more factors: the weather, people, and surroundings are inconsistent and can be unpredictable, and the car has to drive within a complex traffic system. What such a dilemma does is strip away all the parts that make up that system so that only the moral choice remains. It essentially assumes nothing else matters, which is not the case at all (no matter what James Hetfield would like you to believe).
So we need cars to drive in this complex traffic system, and we need the end result to be ethically justified. Does that also mean a car needs to think ethically? Ideally, you'd have the car make a moral assessment of the value of each life. In practice, this is impossible: you can't get all the information you'd need (e.g. the age of the people around you) without serious privacy breaches, and even if you had that information, there is no reliable and generally agreed-upon way to weigh those factors.
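To make concrete why this is a dead end, here's a deliberately naive sketch of what such a moral assessment would have to look like if the car somehow had the data. Every input and every weight below is something I made up for illustration; there is no agreed-upon source for any of them, which is exactly the problem.

```python
# Hypothetical sketch of a "value of life" score an AV would need if it were
# to weigh lives directly. All inputs and weights are invented for illustration.

def life_value(age: int, is_passenger: bool, num_dependents: int) -> float:
    # Where would the car get someone's age or number of dependents without a
    # massive privacy breach? And why these weights rather than any others?
    score = 1.0
    score += max(0, 80 - age) * 0.01      # arbitrary: younger counts for more?
    score += num_dependents * 0.1         # arbitrary: parents count for more?
    score -= 0.2 if is_passenger else 0.0 # arbitrary: sacrifice the occupant first?
    return score

# Two reasonable people could pick completely different weights and reach
# opposite decisions in the same dilemma -- which is the point.
```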
Rule-based moral solutions have also been put forward. These are just as difficult to get right, because the traffic system is so complex. For every moral rule you can think of, there is an imaginable situation that forms an exception to the rule, and those exceptions have exceptions of their own. We humans use our common sense and moral intuition to know when to follow a rule and when to break it; machines have neither.
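Here's a toy sketch of what a rule-based approach runs into. The rule and its exceptions are invented examples of mine, not something proposed in the article:

```python
# Toy sketch of a rule-based "moral driver". Each rule sounds reasonable on its
# own, but every rule needs exceptions, and the exceptions need exceptions.
# The rules below are invented examples, not an actual proposal.

def may_cross_solid_line(situation: dict) -> bool:
    # Rule: never cross a solid line.
    if not situation["solid_line"]:
        return True
    # Exception: crossing is allowed to avoid hitting a pedestrian...
    if situation["pedestrian_ahead"]:
        # ...unless there is oncoming traffic on the other side...
        if situation["oncoming_traffic"]:
            # ...unless the oncoming vehicle has room to brake... and so on.
            return situation["oncoming_can_brake"]
        return True
    return False

# Humans resolve this kind of branching with common sense and moral intuition;
# a machine only has the branches someone remembered to write down.
```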
So what can an AV do? At its core is a fairly simple process. First, it measures its surroundings. Then it classifies and interprets those measurements (e.g. that tree-like object is probably a tree). Next, it works out what is moving and predicts where those moving objects will go. Finally, it plans a path whose main goal is to avoid hitting any of those objects.
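In heavily simplified pseudocode, that driving loop looks roughly like the sketch below. The function and object names are placeholders of mine, not an actual API from any AV stack:

```python
# Heavily simplified sketch of an AV's core loop: sense, interpret, predict,
# plan. Names are placeholders, not a real AV stack's API.

def driving_loop(sensors, world_model, planner):
    while True:
        # 1. Measure the surroundings (cameras, radar, lidar, ...).
        raw_data = sensors.read()

        # 2. Classify and interpret: that tree-like object is probably a tree.
        objects = world_model.classify(raw_data)

        # 3. Predict where the moving objects are headed.
        predictions = world_model.predict_trajectories(objects)

        # 4. Plan a path whose main goal is to not hit any of them.
        path = planner.plan(objects, predictions)
        planner.execute(path)
```

Note that none of the four steps involves an ethical judgement, which is exactly the next point.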
Note how there is no ethical step in this process. That's because an AV doesn't need a conscience to produce ethically sound results. It can do just fine with those four steps, and engineers should focus on making those steps better. As the car becomes more reliable, it will be able to detect, predict and drive around nearly all dangers in its path.
That doesn't mean it will be perfect at it. The real question, then, is who to hold responsible for getting the car to drive ethically. The recent fatal accident with a Tesla shows how difficult this can be. On the one hand, the driver wasn't paying attention and was driving too fast for the road. On the other hand, the Tesla didn't see a truck coming, largely because it relies on only two sensors (a camera and a radar) to detect such dangers, and both failed.
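The sensor side of that failure can be illustrated with a tiny sketch. I'm assuming here, as a simplification and not as Tesla's actual logic, that emergency braking effectively required both sensors to flag the obstacle:

```python
# Simplified illustration of how a detection can slip through when a system
# leans on only two sensors. Not Tesla's actual code: the assumption here is
# that braking fires only when both sensors agree.

def should_brake(camera_sees_obstacle: bool, radar_sees_obstacle: bool) -> bool:
    # Requiring agreement keeps false alarms down, but if one sensor is blinded
    # and the other misclassifies what it sees, nothing triggers the brakes.
    return camera_sees_obstacle and radar_sees_obstacle
```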
Besides Tesla and the driver, other parties share some of the responsibility, precisely because of the complex environment a car has to drive in. Think of the NHTSA, which allowed the Autopilot system onto the road, or Consumer Reports, which slammed Tesla in order to protect consumers. With AV accidents, liability cannot reasonably be pinned on a single party. Blame is divided, and we need to think about how to balance that division properly. If we wait for companies to do it, they can tip the balance in their own favor. How to do this isn't clear yet, but it is in our own interest to start thinking about it now, while we can still steer the technology in the right direction.
________________________
This post is partly to share what I've done, partly to thank you guys for motivating me. Thanks.
(it's also partly to share the amazing artwork they made for the article. Like, damn, it fits so perfectly.)