
> but honestly bots clicking links is just what happens to every public site on the internet.

As a CS student ~20 years ago I wrote a small website to manage my todo list and hosted it on my desktop in the department. One day I found my items disappearing before my eyes. At first I assumed someone was intentionally messing with my app but logs indicated it was just a scraping bot someone was running.

It was a low-stakes lesson in why GET should not mutate meaningful state. I knew when I built it that anyone could click the links, and I didn't bother with auth since it was only accessible from within the department network. But I didn't plan for the bots.
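The lesson generalizes: crawlers issue GETs to every URL they find, so reads and writes have to be separated by HTTP method. A minimal sketch (the `TodoStore` class and routes are hypothetical, not the commenter's actual app):

```python
# Sketch of method-based routing: GET only reads; mutations
# require POST/DELETE, which well-behaved crawlers don't send.
class TodoStore:
    def __init__(self):
        self.items = {}
        self.next_id = 1

    def handle(self, method, path):
        if method == "GET" and path == "/todos":
            return list(self.items.values())  # read-only, safe for bots
        if method == "POST" and path == "/todos/add":
            item_id = self.next_id
            self.items[item_id] = f"item {item_id}"
            self.next_id += 1
            return item_id
        if method == "DELETE" and path.startswith("/todos/"):
            # Mutation lives behind a non-GET method.
            self.items.pop(int(path.rsplit("/", 1)[1]), None)
            return None
        raise ValueError(f"unsupported: {method} {path}")
```

A bot that blindly GETs every discovered path can enumerate the list but can never empty it, which is exactly the property the original app was missing.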



Reminds me of the Spider of Doom, a similar issue where "get/delete" links were merely hidden by simple JavaScript depending on whether the user was logged in. All of a sudden, pages and content on the website began to mysteriously vanish.

You know what doesn't care about JavaScript and tries to click every link on your page? A search engine's web crawler.

https://thedailywtf.com/articles/The_Spider_of_Doom
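The fix in that story is enforcing auth on the server for every mutating request, rather than hiding links client-side. A hedged sketch (the `handle_delete` function, `pages` dict, and `session` shape are all illustrative, not the actual site's code):

```python
# In-memory stand-in for the site's content.
pages = {"/about": "About us"}

def handle_delete(path, session):
    # Even if the UI never shows the delete link, a crawler can still
    # request it directly, so the server must check the session itself.
    if not session.get("logged_in"):
        return 403  # Forbidden: nothing is touched
    pages.pop(path, None)
    return 200
```

Hiding the link in JavaScript only changes what a browser renders; the crawler never runs that JavaScript and sends the request anyway, so the check has to live here.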



