I'm pretty sure in reality Facebook also uses your social network graph to restrict the candidate set and get higher recognition accuracy. This makes it hard to compare Facebook's results to a pure recognition algorithm.
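
To make that concrete, here's a minimal sketch of graph-based candidate restriction; everything in it, names included, is hypothetical and not Facebook's actual pipeline. The point is just that matching a face against a few hundred friends-of-friends is a much easier nearest-neighbor problem than matching against a billion users:

    import numpy as np

    def identify(face_emb, uploader_id, friends, embeddings, threshold=0.6):
        # Restrict candidates to the uploader's friends and friends-of-friends
        # rather than the entire user base.
        candidates = set(friends[uploader_id])
        for f in friends[uploader_id]:
            candidates.update(friends[f])

        # Nearest neighbor by cosine similarity over the small candidate set.
        face_emb = face_emb / np.linalg.norm(face_emb)
        best_id, best_sim = None, threshold
        for uid in candidates:
            v = embeddings[uid]
            sim = float(np.dot(face_emb, v / np.linalg.norm(v)))
            if sim > best_sim:
                best_id, best_sim = uid, sim
        return best_id  # None if nobody in the graph neighborhood matches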

On that note, Facebook could do even better by also restricting the candidate set by time, precise location, and compass orientation, given that most mobile users have Facebook installed and running in their pockets when someone takes their picture. (If they really wanted to, they could do rough recognition purely from position and orientation, without ever looking at the camera image; with the image on top of that it could be near 100% accurate, and it would even work if you photograph a friend from behind.)
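
A rough sketch of that geometric filter (again purely hypothetical: it assumes the photo carries a timestamp, a (lat, lon) fix, and a compass heading, and that candidates' phones reported their own recent fixes; photo and ping are just records with .lat, .lon, .timestamp, plus photo.heading in degrees clockwise from north). Keep only people who were nearby at the time and inside the camera's field-of-view cone:

    import math

    def in_frame(photo, ping, max_dist_m=30.0, fov_deg=70.0, max_dt_s=60.0):
        # Was this person plausibly in front of the camera when the shutter fired?
        if abs(photo.timestamp - ping.timestamp) > max_dt_s:
            return False

        # Equirectangular approximation; accurate enough at tens of meters.
        dlat = math.radians(ping.lat - photo.lat)
        dlon = math.radians(ping.lon - photo.lon) * math.cos(math.radians(photo.lat))
        if 6371000.0 * math.hypot(dlat, dlon) > max_dist_m:
            return False

        # Bearing from camera to person, compared against the compass heading.
        bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0
        off_axis = abs((bearing - photo.heading + 180.0) % 360.0 - 180.0)
        return off_axis <= fov_deg / 2.0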



DeepFace's figures[1] also come from running it on the LFW dataset[2]. While Facebook's production tech will be more accurate for the reasons you give, the DeepFace algorithm has a lot of raw power on its own! (A sketch of LFW's scoring protocol follows the links.)

[1] http://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr...

[2] http://vis-www.cs.umass.edu/lfw/index.html
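
For context on what that number means: LFW's standard protocol is face verification over 6,000 labeled pairs (same person vs. different people), scored by 10-fold cross-validation, and the 97.35% is the mean pair accuracy. A minimal sketch of that scoring, assuming each image has already been reduced to an embedding vector (cosine similarity plus a fixed threshold is illustrative here, not DeepFace's actual metric):

    import numpy as np

    def pair_accuracy(emb_a, emb_b, is_same, threshold=0.5):
        # emb_a, emb_b: (n_pairs, dim) embeddings for each side of each pair.
        # is_same: boolean array, True where both images show the same person.
        a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
        b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
        sims = np.sum(a * b, axis=1)  # cosine similarity per pair
        return float(np.mean((sims >= threshold) == is_same))

In the real benchmark the decision threshold is fit on the nine training folds and accuracy is reported on the held-out fold, averaged over the ten folds.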


Sure, but the numbers in the research paper are "pure" (i.e., not using all this additional information). In production, I'm sure they must be using all these additional cues as well.


Ah, I see, you mean the 97.35% is a "pure" algorithmic result. That's pretty impressive.



