I'm inclined to agree, though I wonder if there could ever be a situation where courts would decide that the person using the software is innocent. Whoever is legally responsible for the software would be liable, I would assume.
There will be a test case and it will be f'ing bizarre. Culpability, as I understand it, comes down to agency and intent. Those three words are only together because I've hung around enough law students to pick up a little by osmosis. Whippin' out that august body of legislative wisdom, Wikipedia:

So whatever test case we're gonna find, it's gonna have to be a fully autonomous event that somehow isn't negligent. How 'bout...

Teen hacker creates a Roomba plugin ('Skynet 9000') that causes the celebrated line of robotic vacuums to chase any movement it detects (cats, etc.). However, the hacker programs a failsafe that stops the rogue Roomba when its collision sensors trigger. iRobot pushes a software update that uses a different communication pathway between the collision sensors and the drive motors. As a consequence, Roombas running Skynet 9000 no longer stop upon bumping up against their targets, as Maude Smith discovers when her Roomba 'Mr. Finster' mauls her morbidly obese chihuahua 'Tiddlypoop', whose short little legs and corpulent little belly prove no match for Mr. Finster's murderous rage, courtesy of some goofing around by her 13-year-old son Timmy.

Mrs. Smith sues iRobot over the software update. iRobot argues that Mr. Finster was running malicious code. Mrs. Smith's attorneys point to iRobot's Create program and argue that Skynet 9000 was not dangerous to canines until they pushed their update. Mr. Finster ran on a schedule - there was no agent that set him in motion. Timmy didn't intend for the dog to get mauled - he overcame some hurdles to accomplish his task, but that was before the software update. iRobot's only culpability lies in whether their software should have been so locked down that nobody could hack it, and whether they should have been cognizant of all unofficial patches. I could see a court ruling that nobody was at fault there (see "moral evil" vs. "natural evil").
The question is what kind of precedent it would set.

A person is culpable if they cause a negative event and
(1) the act was intentional;
(2) the act and its consequences could have been controlled (i.e., the agent knew the likely consequences, the agent was not coerced, and the agent overcame hurdles to make the event happen); and
(3) the person provided no excuse or justification for the actions.
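For the geeks in the thread: the three-part test above reads almost like a boolean predicate. Here's a toy sketch of it in Python - the field names are my own shorthand, not legal terms of art, and real courts obviously don't evaluate culpability as a conjunction of booleans:

```python
from dataclasses import dataclass

@dataclass
class Act:
    """One act and the facts a court might weigh about it."""
    caused_negative_event: bool
    intentional: bool
    knew_likely_consequences: bool
    coerced: bool
    overcame_hurdles: bool
    excused_or_justified: bool

def culpable(act: Act) -> bool:
    """Apply the three-part test: intent, control, and no excuse."""
    # (2) control: knew the likely consequences, wasn't coerced,
    # and overcame hurdles to make the event happen
    controllable = (act.knew_likely_consequences
                    and not act.coerced
                    and act.overcame_hurdles)
    # (1) intentional act causing a negative event,
    # (3) with no excuse or justification offered
    return (act.caused_negative_event
            and act.intentional
            and controllable
            and not act.excused_or_justified)

# Timmy's hack: intentional, hurdles overcome, but he couldn't have
# known the likely consequences before iRobot's update - not culpable.
timmy = Act(caused_negative_event=True, intentional=True,
            knew_likely_consequences=False, coerced=False,
            overcame_hurdles=True, excused_or_justified=False)
print(culpable(timmy))  # False
```

Which is exactly the Roomba scenario's point: every party fails at least one prong of the test.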
This is pretty much how I understand the culpability issue. Most of the problems with automated vehicles will be accidents, and in most of those situations the blame will be put on the company making the automated vehicle. Automated vehicles will try their best to avoid accidents, and will make judgements just like a human does (the intent part of culpability). The question is whether those judgements are good or bad ones. If there is a clear misjudgement that could have been fixed by changing some code, then the car (and the company) is to blame. When the car couldn't have done any better (i.e., best intentions), it's not to blame. If you were to fall in front of an automated vehicle going 100, no, it's not to blame.

There are already automated vehicles in positions where they could hurt people. ASI is a company which automates mining and farming operations. Out of interest, I've emailed them asking who they think is responsible when their machines cause damage, and I've asked them for a license or terms of use to read. Let's hope they respond!
You know you're a geek when you want to read TOS and licensing agreements for automated mining equipment for fun. I'm gonna bet they've got boilerplate that says "no matter what happens, no matter where it happens, no matter how it happens, no matter why it happens, no matter when it happens, it isn't our fault." The question then becomes the enforceability of said boilerplate.