
As another, slightly less draconian approach than actually trying to ban vulnerability research, what about mandating a market where researchers have to sell to the developers and developers are obliged to buy from the researchers, with a middle party setting the prices?

That might provide an incentive for the vendors who aren't already working actively on their security to do so, and also reduce (although obv. it wouldn't eliminate) the number of vulns going straight to offensive products.



So I have two options: I can sell work product in a market controlled by "the FCC of vulnerabilities" with capped upside, or I can work directly as a 1099 contractor at an enormous daily rate for firms that will pay to get vulnerabilities before vendors do.

Why would I take the market route?


In this case what are the firms who pay these rates doing with the information?

Defensive work (e.g. IPS vendors): well, once they've got their early protection in place they can just sell the vuln on to the vendor.

Offensive (e.g. "cyberweapons", ugh, I hate that term): well, that's the point I'm making. That whole industry is bad for defensive security, as it involves keeping vulns secret as long as possible so they can keep being used.

Governments have to make a choice about whether that's an industry they want to encourage, be neutral towards, or discourage.

But given this line of thinking is one you'd disagree with, what option for addressing the problem do you prefer?


What difference does it make what they do with it? Stipulate for now that they use them to hack Russian and Chinese computers. That is, stipulate that there is a good public policy reason to regulate that kind of work. How would you accomplish that regulation? What, exactly, would you ban?

If you can't articulate a reasonable and effective regulation that would control vulnerability research, regulation will do more harm than good: it will wipe out beneficial research and drive talent towards malicious research.

It's not on me to come up with a way to "address" the "problem". Doing nothing seems like a more credible response than trying to outlaw specific kinds of computer programming.


Indeed, however in the same way as it's not up to you to "address" the "problem" nor is it up to me :)

Doing nothing would seem like a losing response given the current swing of events but I'll defer to your greater experience.


BTW I'd respond to the other threads but HN seems to object to deep threads.

On "flaws fixed rather than concealed" sure in a perfect world they do, but limited resources == prioritization and some systems and packages inevitably get left behind, insiders know which those are..


What do I, as a user and customer of software, care why companies harbor known security flaws (that, after all, is what you're talking about)? Hope is not a strategy. Those flaws are going to be discovered whether or not insiders leak them.


I'm not sure where I said hope was a strategy; I said insiders might have information that's of value to attackers (unless I've got my threads mixed up here).



