comment by reguile
reguile  ·  3130 days ago  ·  post: [Annotated] Why Cities Aren’t Ready for the Driverless Car

    how do we convert human driving intelligence into machine driving intelligence?

Why are we trying to convert human driving intelligence into machine form at all? Why not allow these machines and algorithms to learn the best courses of action for reducing accidents, without trying to impose rules like "you should hit the car instead of the bus full of schoolchildren"?

You've built a system that learns how best to manage situations; you don't then turn around and undercut its decisions in order to appease people proposing absurd hypothetical scenarios.

briandmyers  ·  3128 days ago

    Why not allow these machines and algorithms to learn the best courses of actions

Two reasons. First, machine intelligence is still a long way from being capable of human-level reasoning; most of what we have now (that works well) is a very simple-minded, brute-force approach to reasoning (i.e. expert systems). Second, we don't NEED human-level reasoning to make good progress in self-driving. As an example, a huge first step in this process happened long ago and has hardly any innate intelligence at all: the automatic transmission.
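To make "simple-minded" concrete, a rule-based (expert-system-style) controller is essentially a pile of hand-written conditionals. Here is a toy Python sketch of the idea (entirely made up, not any real car's code):

    # Toy rule-based controller: every threshold is chosen in advance
    # by a human engineer; nothing here is learned from data.
    def rule_based_decision(obstacle_distance_m, speed_mps):
        stopping_distance = speed_mps * 2.0  # crude heuristic, not real physics
        if obstacle_distance_m < stopping_distance:
            return "brake"
        elif obstacle_distance_m < 2 * stopping_distance:
            return "coast"
        else:
            return "maintain_speed"

    print(rule_based_decision(obstacle_distance_m=15.0, speed_mps=10.0))  # -> "brake"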

reguile  ·  3128 days ago

Most self-driving systems today use neural networks; I don't think expert systems would be able to manage the complex issues that pop up in driving.

As a result, I don't think we could really understand all the nuances these systems pick up once they've been trained to drive for long enough. It's far easier to create a network that learns a task than it is to work out how the trained system is "thinking".

These aren't brute-force approaches anymore; the system keeps learning as time passes and gets better at what it does.

As such, these networks have one job, and do that one job very well. In this case that job is to prevent crashes.

Fiddle with that network to impose artificial limitations and you are interfering with a system optimized to do one thing, and more crashes will result in the long run. I'm sure there are cases where things go wrong with the program or where things need to be tweaked, but that isn't the same as directly overriding the car when it decides on a course of action that could lead to hitting a school bus rather than a normal car. It may well be that hitting the school bus causes less total harm for some reason, and we should be sure we understand the machine's reasoning before we decide to mess with it.
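To illustrate what I mean by "one job" (a made-up toy in Python, nothing like a production stack): every update below serves a single crash-avoidance-style loss, so hand-editing the trained result means second-guessing that optimization.

    # Illustrative only: fake sensor features and a fake "should brake" label,
    # trained with gradient descent against one objective and nothing else.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))         # pretend sensor features
    y = (X[:, 0] > 0).astype(float)       # pretend "brake needed" label

    w = np.zeros(4)
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid activation
        grad = X.T @ (p - y) / len(y)     # gradient of the single logistic loss
        w -= 0.1 * grad                   # every weight change serves that loss

    print(w)  # overriding the behaviour by hand means second-guessing these weights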

briandmyers  ·  3128 days ago

    Most self driving systems today are using neural networks

No offense intended, but colour me skeptical - can you support this? I believe there's a lot of research into neural networks for pedestrian identification and the like, but I've never seen any indication that (for example) Google Chauffeur uses neural networks at all. Again, they appear to be throwing brute force at the problem.

[edit] Details about self-driving software are hard to find; however, I did find this little hint in this Scientific American article ( http://www.scientificamerican.com/article/autonomous-driverless-car-brain/ ):

    Yet as smart as today's cars may seem, they are cognitive toddlers. In a car brain, software, processors and an operating system need to run algorithms that determine what the car should do, and these decisions must be made quickly.
reguile  ·  3128 days ago

Nvidia thinks so:

http://www.nvidia.com/object/drive-px.html

    In a car brain, software, processors and an operating system need to run algorithms that determine what the car should do

For the purposes of a journalistic article like this, neural networks more than fit that definition. They are, after all, just big arrays of weights with activation functions applied to them.
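A stripped-down sketch of that point (illustrative only; the shapes here are arbitrary and real systems are vastly larger):

    # A "neural network" reduced to its parts: arrays of weights plus
    # activation functions applied in sequence.
    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def forward(sensor_inputs, W1, b1, W2, b2):
        hidden = relu(sensor_inputs @ W1 + b1)  # weights, then activation
        return hidden @ W2 + b2                 # e.g. steering and braking outputs

    W1, b1 = np.zeros((8, 16)), np.zeros(16)
    W2, b2 = np.zeros((16, 2)), np.zeros(2)
    print(forward(np.zeros(8), W1, b1, W2, b2))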

veen  ·  3128 days ago

Any system that assists or replaces driving is a system that converts a bit of human intelligence into machine driving intelligence. We can have a discussion about semantics and heuristics, but that is what I meant by that statement. The interesting challenge to me is how we do this. Is that what you're alluding to? There is an interesting debate going on about whether automated systems should assist or replace human drivers. I recently read an article that proposed a model of human-machine interaction based on the 'horseman analogy': your car has rules and can take some basic actions on its own (like a horse), while you, as the horseman, have the final say and decide on the general course.
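Roughly, that division of labour might look like the sketch below (illustrative Python with made-up action names; the 'horse' keeps its safety reflexes, the 'horseman' keeps the final say otherwise):

    # Toy arbitration between the car's proposal and the human's command.
    def choose_action(car_proposal, human_command=None):
        # The car's own safety reflex always wins, like a horse refusing
        # to walk off a cliff.
        if car_proposal == "emergency_brake":
            return "emergency_brake"
        # Otherwise the human's decision, if given, overrides the car's.
        return human_command if human_command is not None else car_proposal

    print(choose_action("lane_keep", human_command="take_next_exit"))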

reguile  ·  3128 days ago

I assumed you were referring to the idea that machines need to know "human morality" in order to replace human actions, and I was saying that machines shouldn't have our morality imposed on them; they should be allowed to come to decisions naturally, based on the learning algorithms they are built with.

I think human drivers should ultimately be replaced entirely by machine drivers. The horseman analogy is interesting, but it defeats the purpose of the self-driving car. I think it makes a good stopgap, though.