What I’m interested in is whether they counted hypothetical accidents. As far as I know, there always has to be someone in the driver’s seat, ready to step in at any time. How often did that person have to step in? Now that would be an interesting metric, especially when plotted over time.
As is, it should be pretty unlikely for the autonomous cars to cause accidents, even if they were much less reliable than human drivers (since there is always a human ready to step in).
I think autonomous cars are awesome and very clearly the future, but I find it hard to believe that humans never had to step in, especially not in the beginning. I mean, that’s why they do those extensive test drives, right? To find bugs that they can squash.
Computer driven cars also have the option of just stopping when things aren't looking good (analogous to a panic). For all we know they could stop when a leaf blows across the road - unlikely to cause an accident because of the reaction time.
The "just stop" approach isn't available for planes or nuclear power plants for example.
I don't know where you live, but where I live the highways involve a lot of stopping during busy times, and drivers always have to be alert for stopped cars, because they do happen. I suspect highways are where the automated cars have the easiest time; it's far more likely that side roads and suburban areas are more confusing. Abrupt stops are more likely there (e.g. kids or pets running across the road). In the US, I believe that running into the back of another car makes you at fault under almost all circumstances.
Where I live a panic stop in heavy traffic on anything called a "highway" could get you killed. I imagine a lot of the stops we see on the highway that do not result in accidents are made safer by the network effects of lots of cars doing the same thing over time and communicating visually. It's different in nature than somebody (or some machine) getting confused and stopping suddenly in the middle of the road.
Don't get me wrong: things aren't so grim for the "just stop" approach of dealing with a problem. There's no reason an automated vehicle won't be able to communicate with other cars, warn everyone of an emergency stop (at a minimum), even find the best path off the road, depending on how serious the problem is.
How would a full switch to automated cars work anyway? Would they require different roads?
If autonomous cars can stop on a dime for a duck on the road, and human reaction speed can't match that, then mixing them with human drivers isn't safe for the humans. But cost and manufacturing have to factor in before we can just replace every car on the road. People still drive 20 year old cars.
I expect it will be a mixture of carrot and stick, like any good phased replacement: dedicated lanes (since they can drive 2ft from the car in front) and higher speed limits, because they cut reaction times down so much.
Eventually making it mandatory for some classes of roads (motorways / highways).
At some point in the very distant future your classic car will only be good for showing off on track days or classic car meets.
Actually, it kind of is available for planes (at least for the autopilot systems). See the AF447 accident and how the autopilot disconnected itself when it started getting conflicting data from the pitot tubes.
Agreed. Did humans step in when they noticed conditions they believed the car was unable to handle, and thereby prevent the majority of likely accidents?
My suspicion is that the figures quoted around these Google cars and their safety record, if you take them at face value, lead to a ludicrous level of optimism.
I think that Google have simply done a better job of creating a self-driving car than you are willing to let yourself believe.
They have done this by attacking the problem from the 'Big Data' mindset which Google are intimately familiar with. Instead of trying to crack hard AI problems (computer vision being the most obvious) they are simply counting on being able to record and use enough information both before and during the actual driving that they simply avoid these problems.
From what I have seen, this approach seems to be working. My real concern is that it will not be economically viable to integrate the kind of sensor arrays they are depending on into a production car (the 2 radars and the laser scanner being the killers).
Indeed. The self-driving car isn't a new idea - various individuals and university research groups have been working on it for the last decade over in Europe, and it's legal to test these ideas on the road in some countries there.
If you make the assumption that all cars will be AI, then traffic-handling becomes very, very simple. It's not obstacles or accidents that cause serious problems for traffic, it's the fact that you can't know what all other drivers are thinking.
If all vehicles behaved according to a shared rule set, then every vehicle's actions could be predicted, and designing logic to accommodate that becomes much easier.
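To make the predictability point concrete: if every car runs the same deterministic rule, any other car can compute exactly what it will do next. A toy sketch (the car-following rule here is purely illustrative, not any real traffic standard):

```python
# Hedged sketch: with a shared deterministic rule, behavior is predictable.
# Rule (illustrative): keep a 2-second gap; brake if too close, otherwise
# accelerate toward the leader's speed.

def next_speed(own_speed, leader_speed, gap_m,
               time_gap_s=2.0, max_accel=2.0):
    """Speed (m/s) the rule prescribes for the next second."""
    desired_gap = own_speed * time_gap_s
    if gap_m < desired_gap:
        return max(0.0, own_speed - max_accel)          # too close: brake
    return min(leader_speed, own_speed + max_accel)     # match the leader

# Every car evaluating the same function gets the same prediction:
# at 30 m/s with a 40 m gap, the desired gap is 60 m, so the car brakes.
print(next_speed(30.0, 25.0, 40.0))  # -> 28.0
```

With human drivers, there is no such shared function to evaluate, which is exactly the uncertainty the parent comment is pointing at.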
That might work if the road system were entirely isolated from the rest of the world, but it's not. There are pedestrians, cyclists, animals, children and other random obstacles, none of which will ever be controlled by computers.
The idea of all cars following the same set of rules, even if they are all AI-driven, is actually far too optimistic. Bugs and differing interpretations of the standard would cause deviations that the other cars would need to react to anyway.
They are already putting radars into higher end luxury cars as part of their cruise control systems. Granted, google cars involve a good bit more than those systems, but I am fairly confident that the usual trend of features trickling down into normal consumer cars will continue.
I work for an insurance company that specializes in auto insurance. We'd be filthy rich if people only had an accident every 300,000 miles.
The company is pretty well off, but it pays out 95% of premiums in claims.
Doing a very back-of-the-envelope calculation: 22.5% of our insured cars had a claim paid this year (August to August), so the average insured person has a traffic accident every 4 years or so, and drives far less than 300,000 miles in those 4 years (more like 100,000, being generous).
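Spelling out that back-of-the-envelope calculation (the 25,000 miles/year figure is an assumption consistent with the ~100,000 miles per 4 years above):

```python
# Back-of-the-envelope from the figures in the comment above.
claim_rate_per_year = 0.225   # 22.5% of insured cars had a claim this year
miles_per_year = 25_000       # assumed: ~100,000 miles per 4 years

years_between_accidents = 1 / claim_rate_per_year
miles_between_accidents = years_between_accidents * miles_per_year

print(round(years_between_accidents, 1))  # ~4.4 years
print(round(miles_between_accidents))     # ~111,111 miles, far below 300,000
```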
I wasn't saying it isn't better than average, just that it isn't amazingly good. I'm pretty sure my mom has gone 300,000 miles between accidents (growing up, we went on a lot of road trips).
All I was trying to say is that I think the best human drivers can pull off three hundred thousand miles; it's not inhuman. And what we're going to need for self-driving cars to become widely accepted is something that would seem impossibly good to a human. I bet they will do it, eventually.
I was responding to
>My suspicion is that the figures quoted around these Google cars and their safety record, if you take them at face value, lead to a ludicrous level of optimism.
and I don't think 300,000 miles without an accident is a ludicrous number; it's something a human can achieve (even if most humans don't). It's a milestone, sure, but like I said, for self-driving cars to be accepted, they can't just be better than the average driver; they need to be better than the best drivers.
Ah, ok :) , I hadn't understood your statement correctly.
I'm looking forward to self-driving cars because of their convenience! Imagine being able to sleep during road trips, using them as a taxicab in places where there's no parking (just ask the car to park itself :) ), sending them to pick somebody up... the possibilities are endless!