
While I usually agree with your advice, I think this approach, while theoretically correct, is actually damaging to the majority of your audience here. The closest analogy I can think of is that it's like requiring users to change passwords every 30 days: great in theory, but in practice it's a disaster.

The problem is that it is simply untenable for all but the highest-profile sites. Finding good ops people, even in the bay area, is extremely hard. Most sites are only going to realize that a security update has been released when their package manager tells them it has, and updating that package is usually a 30s process.

The companies I've seen have a hard enough time keeping track of security updates with package management. For a small to medium team with two, one, or even zero dedicated ops people, asking them to custom-compile (for example) a webserver, a Ruby implementation, and critical libraries (OpenSSL, glibc, etc.), subscribe to the relevant security mailing lists, and follow along with updates and security patches is tantamount to having them leave unpatched vulnerabilities on their systems for months or even years.

If you ask your users to change passwords every 30 days, there are inevitably going to be a few who take it seriously and generate and remember secure passwords every single time. But the vast majority are going to use weaker passwords than they otherwise would have, and duplicate those passwords across as many accounts as they can figure out how. Likewise, if you ask already overworked ops guys to manually compile and keep track of security vulnerabilities for their webserver and dozens of libraries, a few are inevitably going to keep on top of things and release fixes minutes after vulnerabilities are announced. But the vast majority are going to simply give up after a month or two and be significantly worse off than if they had just used Ubuntu's automatic security package updates.
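
(For concreteness, here's a minimal sketch of what "let the package manager tell you" looks like in practice. It assumes a Debian/Ubuntu host where security fixes are published through a "-security" suite; the command and output format differ elsewhere, so treat it as illustrative, not canonical.)

  # Sketch: list packages with pending updates from the security pocket.
  # Assumes `apt list --upgradable` output of the rough form:
  #   openssl/jammy-security 3.0.2-0ubuntu1.10 amd64 [upgradable from: ...]
  import subprocess

  def pending_security_updates():
      out = subprocess.run(
          ["apt", "list", "--upgradable"],
          capture_output=True, text=True, check=True,
      ).stdout
      return [line.split("/")[0]
              for line in out.splitlines()
              if "-security" in line]

  if __name__ == "__main__":
      for pkg in pending_security_updates():
          print("security update pending:", pkg)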



I don't know what to tell you other than that this is something you actually have to get good at if you're going to run a high-profile app or hold sensitive information of any sort.

I'm not saying you need to be able to find your own nginx vulnerabilities or even write your own patches. But reinstalling nginx or Apache from source shouldn't be a science project for your team; you should know that you can get your prod servers running on a from-source build.

Put in the time now to make sure you can do that, so you aren't caught totally flat-footed when an emergency happens.
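
As a rough illustration (not a prescription), scripting the build is most of the battle. The versions, paths, and configure flags below are placeholder assumptions; the point is only that the steps should be boring and repeatable:

  # Sketch: a repeatable from-source nginx build. Versions, paths, and
  # flags are illustrative assumptions; adjust for your environment.
  import subprocess

  NGINX_SRC = "nginx-1.24.0"          # assumed: source tarball already unpacked
  OPENSSL_SRC = "../openssl-3.0.13"   # assumed: vendored OpenSSL source tree
  PREFIX = "/opt/nginx"

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, cwd=NGINX_SRC, check=True)

  def build():
      run(["./configure",
           "--prefix=" + PREFIX,
           "--with-http_ssl_module",
           "--with-openssl=" + OPENSSL_SRC])
      run(["make", "-j4"])
      run(["make", "install"])   # typically needs root

  if __name__ == "__main__":
      build()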


I agree with your intent. I do. If you're a high-profile site, or you're storing extremely sensitive customer data, this is absolutely something you need to be doing.

But your audience here is startups. Almost always, these startups are cash-strapped, time-crunched, and have zero dedicated ops guys. Ideally, even these kinds of businesses would prioritize security to the level you're asking for.

In reality, as I said in my previous comment, almost none of the startups I've worked at have had the ops capacity necessary to handle this. Even with package management, servers go months without having critical security patches applied. Asking these types of companies to do something that increases the ops overhead necessary to apply patches is going to result in a worse outcome. Keep in mind that it's not simply compiling from source and having infrastructure to apply security patches across multiple boxes. It's also keeping an eye out for reported vulnerabilities — and not all projects have dedicated security mailing lists. Not all projects even report this information via mailing lists.
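
To put a finer point on "keeping an eye out": the best most teams manage is something like the sketch below, a crude watcher that polls whatever advisory feeds exist and greps the titles for packages they actually run. The feed URL is a placeholder; plenty of projects don't publish one at all, which is exactly the problem.

  # Sketch: crude advisory watcher. The feed URL is a placeholder; use
  # whatever your upstream projects or distro actually publish (if anything).
  import urllib.request
  import xml.etree.ElementTree as ET

  FEEDS = ["https://example.org/security-announce.rss"]   # placeholder
  WATCHED = {"nginx", "openssl", "glibc"}

  def titles(feed_url):
      with urllib.request.urlopen(feed_url, timeout=10) as resp:
          tree = ET.parse(resp)
      # Crude on purpose: yields every <title>-ish element, RSS or Atom,
      # channel title included.
      for el in tree.iter():
          if el.tag.endswith("title") and el.text:
              yield el.text

  def check():
      for url in FEEDS:
          for title in titles(url):
              hits = {p for p in WATCHED if p in title.lower()}
              if hits:
                  print("[" + ", ".join(sorted(hits)) + "] " + title)

  if __name__ == "__main__":
      check()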

I wish it were different. I understand where you're coming from. But the incentives are set up ass-backwards, and until companies start having serious liability for data breaches, protecting customer data simply isn't going to be a priority. In the meantime, encouraging them to set up their infrastructure in a way that requires even more ops effort when they're already struggling to keep up is going to have an adverse outcome.


I'd like to think that if a company doesn't have the resources to properly maintain security patches on their deployed applications, they would use some app hosting platform (like Heroku), but I suspect that there are indeed many companies who are set up like you describe.

That doesn't change the fact that relying on your OS package maintainers to properly update packages results in the same outcome: "having them leave unpatched vulnerabilities on their systems for months or even years" (at least the "months" part, if Ubuntu is any indication).

This would seem to be a "damned if you do, damned if you don't" scenario.

Also, the 30-day password change thing isn't even something that's "theoretically correct". It's a "why do we cut the ends of the roast off" vestige from old DoD recommendations.


Hot-off-the-presses updates often contain serious bugs themselves. I've lost count of the number of Drupal patches-to-patches-to-updates I've had to deploy, each arriving within days or sometimes hours of the prior one. It got to the point where, unless it was a critical remote vulnerability, I would never apply a fix that hadn't been out for at least a week with no subsequent serious bug reports.
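
In code form, the rule of thumb I ended up with looks roughly like this (the severity labels are my own informal shorthand, not any project's official taxonomy):

  # Sketch of the "let it bake" rule: apply critical remote fixes
  # immediately, everything else only after a week with no follow-up fix.
  from datetime import datetime, timedelta, timezone

  BAKE_TIME = timedelta(days=7)

  def should_apply(severity, released_at, superseded, now=None):
      now = now or datetime.now(timezone.utc)
      if superseded:                       # a newer fix already replaced it
          return False
      if severity == "critical-remote":    # don't wait on these
          return True
      return now - released_at >= BAKE_TIME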


How do you know what is or isn't critical? Do you just take it on faith that the vulnerability announcement is correct?


while theoretically correct

It is actually practically correct. Theoretically, there shouldn't be errors. But there are.


Perhaps if you can't afford to build a safe system, you should reconsider if you should be in the business of building unsafe ones?



