If you hold developers responsible, you can kiss self driving cars goodbye.
What should be passed into law (though I can't see how) is an allowed death rate, at least in the early years: set it to something like 5-10% of the current rate, dropping to 1% after 20 years.
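For a rough sense of scale, here's a tiny back-of-the-envelope sketch; the ~37,000 baseline is only a ballpark for recent annual US road deaths, not an exact figure:

    # Purely illustrative: what a 10%/5%/1% cap on the current death rate
    # would mean, assuming a ballpark baseline of ~37,000 US road deaths/year.
    baseline_deaths_per_year = 37_000
    for cap in (0.10, 0.05, 0.01):
        allowed = baseline_deaths_per_year * cap
        print(f"{cap:.0%} cap -> about {allowed:,.0f} allowed deaths per year")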
People will die from self-driving cars, and undoubtedly there will eventually be a case that is 100% the self-driving car's fault. The benefit of self-driving cars comes from the mistake being permanently fixed, while with human drivers it can be committed over and over again.
There needs to be some kind of protection for the companies (and obviously the developers; I've never heard anyone argue they should be held responsible before) from lawsuits. Otherwise all it'll take is a small handful of lawsuits before companies just let the technology die.
> Civil engineering and medical device manufacturing seems to be doing fine, despite having similar principles of engineers' liability.
The idea that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice is several orders of magnitude beyond the level of liability that civil engineers and medical device manufacturers have.
> that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice
Nobody said that. The original comment said developers should be held "accountable for a crime that if committed directly by a person would almost certainly result in jail time" [1].
The standards from medical devices and/or civil engineering, with the associated licensing requirements and verification processes, make sense. Even in the case of a careless mistake or strategic oversight, individuals who could have known but nevertheless signed off should be identified, if not explicitly punished.
> > that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice
> Nobody said that.
Well, they quite literally did, because the original comment in this thread was:
> We need to discuss how the developers self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that if committed directly by a person would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.
I guess you can quibble about the difference between "accountable" and "liable", but that's not a discussion that's particularly interesting to have here, especially given OP's other comments in this thread which make it quite clear that this is what they had in mind.
The quibble in this case would be the meaning of "developers." In the case of a medical device, the developer is considered to be the Manufacturer, not the specific software developers on the team. Considering how often teams change, etc., using the latter definition would be meaningless.
An act doesn’t become okay because two people (the executive and developer) and a robot are now responsible instead of one. What is your justification for the sort of utilitarian calculation you’ve made here? Why do you assume self-driving cars will be safer without any evidence?
If we are going to be arguing from a utilitarian standpoint, suppose we hold the executives of self-driving car companies as responsible as if they were themselves drivers. Then if self-driving cars truly are safer as you optimistically claim, both fewer people will die from accidents involving them and fewer people will go to jail for those same accidents. Seems like a win to me.
Why is it at all quite obvious? How is arguing for being careful “such a stupid argument it’s really just not worth anyone’s time to entertain”?
Someone in this discussion has an insane amount of blind faith in technology which here literally killed a pedestrian, and it’s not the people who are arguing for just consequences.
Are you arguing that a machine does not have better reaction times than a human being? Are you arguing that a machine can fall asleep, drink and drive, panic in a high stress situation?
Aren't you the same person who called for holding the developer liable for writing software with a bug? Are you accusing the developer of promising something that is impossible (not hitting a pedestrian in a crosswalk?) or simply implementing it wrong?
It's worth pointing out that we have no idea yet who is at fault in this accident. It could easily be someone who simply walked out in front of traffic when they weren't paying attention.
"Are you saying X" is a pretty aggressive way to frame your argument.
The above poster seems pretty clear that it is NOT obvious that cars will necessarily drive safer than humans on average, in the same way it is NOT obvious that we will ever have General Artificial Intelligence.
These are very complicated problems, and the machines are currently (significantly) worse than human drivers, so I think it's fair to question the argument that "everything will work out eventually."
The answer to your first “question” of course is that it depends on the machine, how it’s built, programmed, and the context of operation. Machines can have much faster reflexes, or they can freeze.
Theoretically it could be otherwise, perhaps, though the human brain has an extremely parallel pattern-matching engine honed by about half a billion years of evolution.
Realistically, the self-driving system will be made of layered distinct components that all add latency. This is how we build both hardware and software. An image is sensed, it gets compressed, it gets passed along the CAN bus, it gets queued up, it gets decompressed, it gets queued up again, object detection runs, the result of that gets queued up for the next stage... and before long you're lucky if you haven't burned a whole second of time.
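As a rough back-of-the-envelope sketch of how that adds up (every stage and number below is invented purely for illustration, not measured from any real system):

    # Hypothetical per-stage latencies in milliseconds for a layered
    # perception-to-actuation pipeline; all of these numbers are made up.
    stages_ms = {
        "sensor exposure + readout": 33,   # roughly one frame at 30 fps
        "compression": 5,
        "bus transfer + queueing": 10,
        "decompression + queueing": 10,
        "object detection": 50,
        "tracking + prediction": 20,
        "planning": 30,
        "actuation command + brake lag": 100,
    }
    print(f"end-to-end: {sum(stages_ms.values())} ms")  # 258 ms with these made-up numbers

The exact figures don't matter; the point is that each layer only adds a little, yet the total creeps toward (or past) human reaction time surprisingly fast.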
Machines can drive aggressively.
There was a university that had self-driving cars do parallel parking... by drifting. Driving along, the car would find a parking spot on the other side of the road. It would steer hard to that side, break traction, swing the rear of the vehicle around sideways through a 180-degree turn, and finally skid sideways into the spot. The car did this perfectly.
That kind of ability is something I personally don't have. I would want a self-driving car that could do this. If I'm paying, and that kind of driving is my preference, I expect to get it.
I really don't want you to get your wish. We have no need to invest in flashy self-driving car stuntmen; building a car that can get you from A to B safely and in a reasonable time frame is all that we should be aiming for.
That sort of drifting parallel park might work most or nearly all of the time, but if the road conditions are poor and the car loses grip, it will be a lot more risky.
The camera feed going straight to the neural network will not have a lot of latency, and the neural net will not take very long to process the image and make a decision. Humans need at best half a second and at worst several seconds to recognize, process, and act. These systems are designed to respond quickly; they do not have a second of latency.
> What should be passed into law (though I can't see how) is an allowed death rate, at least in the early years: set it to something like 5-10% of the current rate, dropping to 1% after 20 years.
With so few self-driving cars on the road, that number should be zero. If you can't assure safety with a few cars, each with a human as backup, you should not be on the streets. And this isn't the first dangerous accident involving an Uber self-driving car where Uber was at fault.