Hacker News

I have seen no evidence that any autonomous vehicle currently deployed can react faster than an alert and aware human. The commentariat tends to imagine that they can, and it's certainly plausible that they eventually will. But I've never seen anyone official claim it, and the cars certainly don't drive as though they can quickly understand and respond to situations.


"Alert and aware human" is already a high standard, given how most humans drive in routine mode, which is well understood to be much worse than "alert and aware".

From what I've seen I wouldn't trust autonomous cars to "understand" all situations. I would trust Waymo cars to understand enough to avoid hitting anything (at a level better than a human), at the risk of being rear-ended more often. Everything I've seen from Tesla and Uber has given me significantly less confidence than that.


> I have seen no evidence that any autonomous vehicle currently deployed can react faster than an alert and aware human.

The argument I've always heard is that autonomous systems will outperform humans mostly by being more vigilant (not getting distracted, sleepy, etc.) rather than by using detectors with superhuman reaction times. Obviously, whether or not this outweighs the frequency of situations where the autonomous system gets confused when a human would not is an empirical question that will change as the tech improves.


Lots of people imagine both: that the system will never be distracted and also that it will have superhuman reaction time.

And, like, once the systems are mature and hardware has evolved and so forth, I think that's right. It will. It might today under certain circumstances, like if it gets a really unambiguous sensor input (or it might not).

But I've never heard anyone actually associated with a driverless car program assert that their vehicles have superhuman reaction times today, and the vehicles drive extremely cautiously. I think it's likely that due to their difficulty in understanding their sensor readings, if you look at total time necessary to make a course correction from the point when an obstruction first could be noticed by an alert human driver, driverless cars are not winning and may be substantially losing in at least some cases.


Not a truly autonomous vehicle example but this is a case where most likely the car reacted before the driver was even aware of a problem: https://www.youtube.com/watch?v=APnN2mClkmk

I agree with the sentiment though. This has been a major selling point for this technology, but it has not been sufficiently demonstrated yet.


From the video, it looks like the car noticed the crash at the same time a human would, but the car decelerated immediately, whereas the human might have slower reaction time.

(It certainly didn't predict the crash "seconds before": the car accelerated toward the imminent collision until less than a second before impact.)


I live and commute in Waymo country, and see evidence of quick reactions, though I can't say for sure whether it's an alert human taking over. Mostly, the Waymo vehicles still drive conservatively.


Consider that autonomous braking systems know the distance required to stop and activate appropriately. Try getting a human to mimic that.
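To put some rough numbers on that (my own back-of-envelope sketch, not from any vendor: friction coefficient, speeds, and reaction times below are all assumed), total stopping distance is reaction distance (speed × reaction time) plus braking distance v²/(2μg). An automated system's advantage, if any, comes almost entirely from the reaction-time term:

```python
MU = 0.7    # assumed tire-road friction coefficient (dry asphalt)
G = 9.81    # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mps: float, reaction_time_s: float) -> float:
    """Reaction distance plus ideal braking distance v^2 / (2*mu*g)."""
    reaction = speed_mps * reaction_time_s
    braking = speed_mps ** 2 / (2 * MU * G)
    return reaction + braking

speed = 27.0  # ~100 km/h, in m/s
human = stopping_distance_m(speed, 1.5)    # typical alert-driver reaction time
machine = stopping_distance_m(speed, 0.2)  # assumed sensor-to-brake latency
print(f"human: {human:.1f} m, automated: {machine:.1f} m")
# → human: 93.6 m, automated: 58.5 m
```

Both share the same ~53 m of physical braking distance; the gap is entirely the reaction term, and it evaporates if the system needs extra seconds to decide its sensor reading is really an obstacle.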


Humans correctly gauge stopping distance and "activate appropriately" like tens or hundreds of billions of times per day. Try again.

Something that is endemic in HN discussions of driverless cars is commenters who dramatically overestimate the dangers of humans driving. Like an order of magnitude or more. You see it all over this thread, in which people imagine that killing one pedestrian in like let's say 10 to 20 million miles driven (with those miles overwhelmingly done in unchallenging conditions) constitutes "vastly safer than human drivers" rather than "vastly less safe than human drivers."
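For scale, here's my own arithmetic on that hypothetical, using the widely cited US figure of roughly 1.2 traffic fatalities per 100 million vehicle miles (an assumed baseline, covering all road users, not just pedestrians):

```python
HUMAN_FATALITIES_PER_100M_MILES = 1.2  # assumed US average across all road users

# The hypothetical above: one pedestrian death in 10-20 million autonomous miles.
for miles in (10_000_000, 20_000_000):
    implied = (1 / miles) * 100_000_000      # implied deaths per 100M miles
    ratio = implied / HUMAN_FATALITIES_PER_100M_MILES
    print(f"1 death per {miles:,} miles -> {implied:.1f} per 100M miles, "
          f"~{ratio:.1f}x the human rate")
# → 1 death per 10,000,000 miles -> 10.0 per 100M miles, ~8.3x the human rate
# → 1 death per 20,000,000 miles -> 5.0 per 100M miles, ~4.2x the human rate
```

So if those mileage figures are even roughly right, one death in that window is several times the average human rate, not better than it.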


> Humans correctly gauge stopping distance and "activate appropriately"

Toyota has enough UX data that they added "brake assist": it turns out a lot of accidents happen when the driver stomps the brake but then releases it, never presses it hard enough to come to a complete stop, or was trained before anti-lock brakes.

https://www.youtube.com/watch?v=grcuorbrYxA

Another scary thing about braking is that most cars probably don't have their brake fluid changed often enough; most owners and even dealers treat it as a lifetime interval, when it may be closer to an annual one.


Where did you get 1 in 10 million miles from?

Driverless cars simply don't have the mileage yet to prove they are safer than driverful cars. But there's also no indication they are less safe, based on casualty rate so far.


Waymo said that they'd hit 4 million miles back in Nov 2017; they seem to have done the most miles of the several contenders, so I took a wild guess, trying to err on the side of overestimating the number of miles.

There is an indication that they're less safe! They have (very conservatively) 5x the number of fatalities per mile driven! Now, look, the error bars on that estimate are of course massive. It is plausible that they are much safer and they just rolled the dice and got unlucky. But this is data, as long as you include the error bars.
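The "massive error bars" point can be made concrete with a quick Poisson sketch (my numbers, not established figures: I'm assuming a human baseline of ~1.2 fatalities per 100 million vehicle miles and ~10 million autonomous miles). Ask: if autonomous cars exactly matched the human rate, how surprising would one fatality in that mileage be?

```python
import math

HUMAN_RATE = 1.2 / 100_000_000  # assumed: ~1.2 fatalities per 100M vehicle miles

def p_at_least_one(miles: float, rate_per_mile: float) -> float:
    """Poisson probability of observing >= 1 fatality over `miles`."""
    lam = miles * rate_per_mile
    return 1 - math.exp(-lam)

p = p_at_least_one(10_000_000, HUMAN_RATE)
print(f"P(>=1 fatality in 10M miles at the human rate) = {p:.3f}")
# → P(>=1 fatality in 10M miles at the human rate) = 0.113
```

An ~11% chance of seeing one death even at the ordinary human rate means a single event can't establish a 5x difference; by the same token it can't rule one out either.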




