
I wonder if the AI is going to overfit traffic patterns and behavior. The rest of the world doesn’t drive like they’re in Phoenix.


I think it's reasonable to expect that overfitting is on Waymo's radar (no pun intended), given 1) their parent/sibling company's expertise in the domain and 2) their extensive training outside of Phoenix, both in the real world and in simulation.

To continue down this path, I wonder how easy/hard it is for the Waymo "driver" to adjust to driving on the other side of the road.


> driving on the other side of the road

just flip the X axis of the camera input!
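
Tongue in cheek, but for the record the naive flip really is a one-liner in numpy (a toy sketch, not anything Waymo does):

    import numpy as np

    def mirror_frame(frame: np.ndarray) -> np.ndarray:
        """Naively mirror a camera frame of shape (H, W, C) left-to-right."""
        return frame[:, ::-1, :]

    # Of course this also mirrors road signs, lane markings, and text,
    # and does nothing about right-of-way rules that differ between
    # left- and right-hand-traffic countries.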


I think the mid-game probably needs to be fitting to regional driving behavior anyway.


If you take the rules of human defensive driving as gospel, the safety side of this problem seems to go away. Defensive driving doesn't act on predictions of what external agents _will_ do, which may vary by region. It identifies what external agents _could_ do, which does not vary by region, and engages accordingly with rules and physics, e.g. by making sure the vehicle always has a safe escape path if it needs to stop abruptly or swerve to avoid a collision.
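
A minimal sketch of that "safe escape path" invariant, assuming a simple kinematic model (the deceleration and reaction-time figures are illustrative, not anyone's actual parameters):

    def min_stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0,
                                reaction_s: float = 0.5) -> float:
        """Worst-case distance to stop: reaction distance + braking distance."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

    def has_safe_escape(gap_ahead_m: float, speed_mps: float,
                        lane_change_possible: bool) -> bool:
        """Defensive-driving invariant: we can always stop short of the
        vehicle ahead, or bail into an adjacent lane, no matter what
        that vehicle does next."""
        return (gap_ahead_m > min_stopping_distance_m(speed_mps)
                or lane_change_possible)

    # e.g. at 25 m/s (~56 mph) with 6 m/s^2 decel and 0.5 s reaction:
    # min_stopping_distance_m(25.0) is roughly 65 m of required gap.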


In the mountains of Virginia, you are lucky to get 6 inches between two bumpers at 55 mph, especially up by Roanoke.

A defensive-driving tactic might get you killed there, because you'd have to literally stop to keep more than a car length of space in front of you for longer than a second.

Regional driving behaviors are to be expected, and other drivers expect you to follow them. If you don't, you might cause an accident. Humans are good at this; machines, not so much.

I’ve driven around the country for several hundred thousand miles. Different parts of the country do things differently.


Is there any evidence that the Waymo approach involves "the AI" at all? I think you're projecting the approach of other, failed self-driving efforts onto Waymo.


Waymo uses ML/AI extensively. Here's a blog post from earlier this year about how they're forgoing CNNs for a "hierarchical graph neural network." https://blog.waymo.com/2020/05/vectornet.html
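
The post is light on implementation detail, but purely as a toy illustration of the two-level idea it describes (per-polyline subgraphs pooled into node features, then a global interaction graph over those nodes), here's a numpy sketch. The shapes, the max-pool, and the single attention round are my assumptions, not Waymo's architecture:

    import numpy as np

    rng = np.random.default_rng(0)

    def encode_polyline(vectors: np.ndarray, W: np.ndarray) -> np.ndarray:
        """Subgraph level: embed each vector of a polyline (lane,
        crosswalk, agent track), then max-pool into one node feature."""
        h = np.maximum(vectors @ W, 0.0)   # shared MLP layer (ReLU)
        return h.max(axis=0)               # permutation-invariant pool

    def global_attention(nodes: np.ndarray) -> np.ndarray:
        """Global level: one round of self-attention so every map/agent
        node can attend to every other node."""
        scores = nodes @ nodes.T / np.sqrt(nodes.shape[1])
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ nodes

    # Three polylines, each with 4 vectors of 8 features; project to 16 dims.
    W = rng.normal(size=(8, 16))
    polylines = [rng.normal(size=(4, 8)) for _ in range(3)]
    nodes = np.stack([encode_polyline(p, W) for p in polylines])
    context = global_attention(nodes)      # (3, 16) scene-aware features
    print(context.shape)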


Is the concern that these nets might be trained in such a way as to infer the wrong thing in cities other than Phoenix?


I doubt they'd blindly deploy in another city with a network trained entirely in Phoenix. That's not something even an amateur MLE would do.


Yes, overfitting to the training set is a problem with AI.
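
The textbook illustration, as a toy numpy sketch: a model flexible enough to thread every training point typically does worse on held-out data. Phoenix-only driving data would be the analogous trap.

    import numpy as np

    rng = np.random.default_rng(1)
    x_train = np.sort(rng.uniform(-1, 1, 8))
    y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, 8)
    x_test = np.linspace(-1, 1, 100)
    y_test = np.sin(3 * x_test)

    for degree in (3, 7):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, "
              f"test MSE {test_mse:.4f}")

    # The degree-7 fit interpolates all 8 training points (train MSE ~0)
    # but can swing wildly between them, hence worse held-out error.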


Are you suggesting that Waymo might not use machine learning? What do you imagine they do instead?


No, I understand they use NNs for object classification and so forth, but compare their approach to Tesla, where the overall architecture is camera → a miracle → steering actuators. It seems from the outside that Waymo relies less on the black box.


Also, rule-based systems / manual decision trees can overfit :)
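
True: hardcoded thresholds are just parameters fit by hand to whatever traffic the author happened to see. A contrived sketch (the numbers are made up):

    def should_merge(gap_s: float) -> bool:
        # A hand-tuned rule that quietly encodes its home city: if local
        # drivers leave ~3 s gaps, requiring 2.5 s "always works"...
        return gap_s >= 2.5

    # ...until you deploy somewhere typical gaps are ~1 s and the car
    # can never merge. The decision tree overfit its home distribution.
    print(should_merge(3.0))   # True in sparse, Phoenix-like traffic
    print(should_merge(1.2))   # False forever in denser traffic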



