
Yeah, but software is complex, and we don't have tools to analyze its code effectively. The scanning solutions currently on the market are really crude; most of them do behavioral analysis looking for very basic vulns.

In the case of AI models, brute-forcing is much easier because their input channels are limited. They're also probabilistic by design, so hardening them is much harder than hardening conventional software. A code leak is one thing; things can get really bad if the prod weights leak.
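To make "brute-forcing the input channel" concrete, here's a minimal sketch: random perturbations of a benign input until the predicted label flips. The `model` here is a stand-in (a fixed random linear classifier); against a real deployment it would be API queries.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for the target model: a fixed random linear classifier.
    # In a real attack, this would be queries to the deployed model.
    W = rng.normal(size=(10, 784))
    def model(x):
        return int(np.argmax(W @ x))

    x0 = rng.normal(size=784)   # some benign input
    base = model(x0)

    # Brute force: try random small perturbations until the label flips.
    for i in range(10_000):
        delta = rng.normal(scale=0.05, size=784)
        if model(x0 + delta) != base:
            print(f"label flipped after {i + 1} queries")
            break

No gradients, no access to internals; the only thing the attacker needs is the input channel and enough queries.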

However, the cost of GPU computation works as a big deterrent, for now. It's expensive to scan a model for vulnerabilities with massive parallelism. But it also means it's difficult for developers to verify their own models, so manual guesswork is still a viable attack strategy.
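For a sense of scale, a quick back-of-envelope, where every number is an assumption, not a measurement:

    # Rough cost of a brute-force scan (all figures are assumptions).
    queries      = 10**9    # candidate inputs to try
    qps_per_gpu  = 100      # inference throughput per GPU
    usd_per_hour = 2.50     # on-demand GPU price

    gpu_hours = queries / qps_per_gpu / 3600
    print(f"{gpu_hours:,.0f} GPU-hours ~= ${gpu_hours * usd_per_hour:,.0f}")

That's ~2,800 GPU-hours, on the order of $7k, for a single billion-query sweep. Painful enough to deter casual scanning, but not out of reach for a motivated attacker, and equally painful for a defender trying to do the same verification.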


