Hacker News

I'm just waiting for the first security researcher to exploit the googlebot.


My primary role at Google from 2006 to 2010 was executing JavaScript in the indexing pipeline (not exactly in Googlebot, but close enough). I knew I was executing probably every known exploit out there, plus a lot of 0-days, and took lots of precautions (a single-threaded subprocess with a very restricted list of allowed syscalls, etc.). It's not perfect, but breaking out of the sandbox would require a kernel 0-day in the subsystem used by our sandbox, plus a JS engine exploit.


I think it goes without saying that no system is 100% safe. ;)


How do you think the decision is made to insert a safebrowsing interstitial?




