That report ends by saying essentially, "it may not be possible to prove the safety of self-driving cars". [1] So the value here is questionable and the same logic could apply to anything with a low frequency of occurrence. The value of air bags by this measure was not proven until they were already mandated.

[1] "even with these methods, it may not be possible to establish the safety of autonomous vehicles prior to making them available for public use"



The difference, of course, is that an airbag can't take control of a car and run someone over.

More to the point, the report notes that new methods to determine the safety of self-driving cars are required.

Which the industry is not exactly falling head over heels trying to develop.


The report also says that these hypothetical new methods may not be able to prove safety. It’s not a straightforward problem: how do you prove that you’ve reduced (or at least not increased) a problem that occurs so infrequently?
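To give a sense of the scale involved, here is a rough back-of-the-envelope sketch using the statistical "rule of three" (with zero observed events in N trials, the one-sided 95% upper confidence bound on the event rate is about 3/N). The baseline rate below is an assumed round number for illustration, on the order of the commonly cited US figure of roughly one fatality per 100 million vehicle miles:

```python
# Rough illustration of why proving safety for rare events is hard.
# Assumption (hypothetical round figure): ~1 fatality per 100 million miles.
baseline_rate = 1 / 100_000_000  # fatalities per vehicle mile

# Rule of three: with zero fatalities observed over `miles` of driving,
# the one-sided 95% upper confidence bound on the rate is ~3 / miles.
# To claim the fleet is no worse than baseline, we need 3 / miles <= baseline_rate.
miles_needed = 3 / baseline_rate

print(f"Fatality-free miles needed: {miles_needed:,.0f}")
# i.e. on the order of hundreds of millions of miles, all fatality-free,
# just to bound the rate at the human baseline with 95% confidence.
```

Showing the fleet is meaningfully *better* than the baseline, rather than merely not worse, would require far more mileage still, which is part of why the report doubts on-road data alone can settle the question before deployment.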

Realistically no one will trust “new methods” and establishing their relevance is really difficult. I would imagine that most of these companies are running lots of simulations, because why wouldn’t you? But how many people will see that and trust it more than data gathered on the road?


Indeed, there are very few reasons to trust simulations to tell us anything about safety in the real world.



