Hacker News

How about the NSA and similar institutions in other countries don't hoard zerodays and instead actively work to improve security? Defending against hackers is a lot easier than defending against nuclear missiles. We don't need an active deterrence if the defenses are good.


While I agree and I hate the NSA's practice of hoarding zero days, they do this to keep software vulnerable on purpose. They've invested millions in uncovering zero days SPECIFICALLY so they could then spend millions more to be the only ones exploiting these vulnerabilities. If they share the zero day, the vendor will write a patch and it's game-over for their multi-million dollar exploit.

Don't get me wrong, it's entirely inappropriate for an American government agency to be subverting the security of American products made by American companies and used by the American public. We have a unique opportunity that no other country has to coordinate our secret intelligence work with the private sector to completely own the security industry, but we allow our government to take an adversarial approach to defense. It's insane and criminal, but that's how it is.

Imagine how effective an intelligence campaign would be against, say, Iran, if the NSA went to Microsoft, disclosed a vuln, and then told them to patch the vuln ONLY in specific markets. Roll out the patch to American companies first, but leave select targets still vulnerable. No other country could get Microsoft to do that except the United States... And we waste that opportunity by trying to lone-wolf everything.


> Imagine how effective an intelligence campaign would be against, say, Iran, if the NSA went to Microsoft, disclosed a vuln, and then told them to patch the vuln ONLY in specific markets.

While that approach might be effective, the long-term result could be increased distrust of American software companies by other countries, depending on how it's done.

I think the primary problems are structural: that (a) a single agency has to act as both red and blue teams, (b) the red team lacks enough oversight to ensure it respects rule of law and civilian civil rights, and (c) the blue team isn't authorized to disclose all its known vulnerabilities to software vendors.


> the long-term result could be increased distrust of American software companies by other countries, depending on how it's done

America isn't Australia. The NSA hoards bugs in software domestic and foreign alike.

One amendment to their charter which might make sense would be installing a public ombudsman. This ombudsman reviews the NSA's vulnerability hoard, takes into account the software's usage in the United States and then makes arguments to senior leadership for selectively releasing them.


It seems to me that the public ombudsman role you propose is already played by the other agencies and departments involved in the Vulnerabilities Equities Process https://www.whitehouse.gov/sites/whitehouse.gov/files/images...

I would specifically call out Treasury (high dependence of financial sector on publicly available products), OMB (high dependence of government systems on publicly available products), and Commerce (most direct purview of publicly available products) in playing that role.


I'm sorry, but this seems a little naive. 1) It assumes that the NSA would only want to exploit a vulnerability exactly once, 2) this behavior would be fairly obvious since hackers examine patch payloads, and 3) this behavior on the part of American tech would undermine their credibility globally. Wouldn't you reasonably be very skeptical of Microsoft if you were a foreign government if you knew they were actively working with the American government to pwn you?

The status quo of a semi-adversarial relationship between the NSA and tech companies for offensive cyber capabilities seems better to me for both parties. Of course it would be good if they were collaborating on defensive cyber security.


> I'm sorry, but this seems a little naive.

https://en.wikipedia.org/wiki/NSAKEY

I thought everybody already knew that US corporations serve as an extension of the surveillance apparatus. Remember all the corporations fighting against the government's mandate of an artificially crippled maximum key size of 40 bits, meant to allow continued surveillance in the 90s? Yeah, neither do I.


The claim is not "naive" as in "of course the NSA wouldn't want to exploit things, they're innocent angels", the claim is "naive" as in "they have better ways to exploit things."

Interpreting _NSAKEY as an NSA backdoor is similarly naive. First, it's named _NSAKEY. Surely they could name it something else. Second, its purpose was reverse-engineered, and it's capable of signing cryptography modules, same as the existing Microsoft key named _KEY. Anything that could be done through _NSAKEY could also be done through _KEY, so it would be easy for the NSA to just ask for a copy of _KEY such that nobody would notice. The conspiracy theory makes no sense - it's like saying "$politician is trying to take away our freedoms by pouring mind-control agents into the water" when $politician is just straight-up signing bills to take away your freedoms.


It was a debugging symbol that a Microsoft developer either negligently or heroically included in a public release... so that explains away the "nobody would be so stupid" argument. You are aware of how the Intel ME killswitch was located, right? A commented XML file included with the flashing software helpfully informed anybody willing to look that a field was related to the NSA's High Assurance Platform program. This was after ten years of security researchers pointing out that this was a backdoor. For whatever reason both Intel and the NSA were happy to let the public remain needlessly vulnerable all that time... But yeah, I'm just like one of those water fluoridation loons. The NSA wasn't at all hamfisted in the intentional weakening of elliptic curves and the blatant RSA bribery; this isn't an obvious pattern emerging.


NSAKEY people have had over two decades to produce any evidence in support of their weird conspiracy theory, but strangely enough they’ve utterly failed to do so.


The demand for evidence in the wake of all the NSA leaks is laughable.[0] What does evidence of the NSAKEY being a backdoor look like to you, a provably malicious CSP shim, signed by the key, hand delivered by James Clapper?

I'll tell you what it looks like to me:

After the debug symbol is found, Microsoft gives a seemingly very stupid explanation for it[1]: "It is a backup key. Yeah, uhhhh... during the export control review - the NSA said that we had to have a backup key, so we named it after them..." After being challenged on the plausibility of their backup scheme they refuse to provide any further explanation.

Here is the funny part: Microsoft might be technically telling the truth about it being a "backup". Consider what else was going on around this period: ridiculous export controls on key-length, the clipper chip... and finally: government managed private-key escrow[2]. At that time the export regulations did not specify a backup requirement, and yet Microsoft claims otherwise. You know who else was talking a lot about backups? The White House, in its proposal for allowing the export of key-lengths above 56-bits - so long as applicants implement "key-recovery".[3] Somehow I don't think that we share the same definition of the word "backup".

Also, ECI Sentry Raven[4], have fun with that.

[0] https://assets.documentcloud.org/documents/784280/sigint-ena...

[1] https://cryptome.org/nsakey-ms-dc.htm

[2] https://web.archive.org/web/20000818204903/https://csrc.nist...

[3] https://epic.org/crypto/key_escrow/key_recovery.html

[4] https://archive.org/details/nsa-sentry-eagle-the-intercept-1...


Evidence of the NSAKEY being a backdoor includes some description of how the backdoor might work, backed up by a reference to the relevant Windows source code or its disassembly, both of which are easily available to researchers. What sort of backdoor is it? Does it provide remote access to Windows? Does it enable certain cryptographic modes that are disabled? Does it disable certain cryptographic modes that are enabled? Does it trigger key recovery, and if so, how?

Evidence of X does not include "X would have been done by Y, and Y did Z, and X and Z are both bad, so why wouldn't Y do X too." That is basically the definition of an ad hominem argument. Whatever else the NSA may have done, and however much reason there is to believe the NSA might have wanted to do this specific thing, it's not evidence of them doing this specific thing (and again I'm not sure what this specific thing is even supposed to be). And if anything, the lack of mention of NSAKEY in the leaks is a reason to believe that there wasn't anything there.

Evidence of X also does not include "Y refused to talk about X." That might be evidence that Y is suspicious and untrustworthy (or evidence that the person asking was a conspiracy theorist who wouldn't be satisfied by any explanation), but it's not evidence that Y actually did X.

So, that's my definition of evidence. I'll turn this around: what would evidence that NSAKEY was not a backdoor look like to you? Would anything convince you, or is your claim unfalsifiable?


> Evidence of the NSAKEY being a backdoor includes some description of how the backdoor might work...

It would only work one way with an API relying on a PKI with a single CA, zero transparency, and trusted keys named after spy agencies suddenly appearing out of nowhere. I'm gonna bail here, because I'm now not sure if you honestly don't know what the CAPI was in relation to the NSAKEY - or if you're trying to waste my time by getting me to explain the most basic principles of public key infrastructure.


Here is a basic principle of public key infrastructure: anything signed by one CA can be signed equally well by another, unless the code is designed to give one CA special permissions (like EV certs, in the HTTPS PKI).

You are wrong on the facts: there is no "single CA" - there is _KEY in addition to _NSAKEY.

So, this brings me back to the point I mentioned at the top of the thread: why didn't the NSA just demand a copy of the private key for _KEY instead of a separate key? A separate key always carried a risk, and also required a rebuild - handing over _KEY could have happened immediately. If _NSAKEY has special permissions, can you point me to where in disassembled CAPI code / leaked source these special permissions are implemented, and what they are?
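The verification logic being described can be sketched as a toy model (HMACs stand in for the RSA signatures the real CryptoAPI used; all names and key material here are made up for illustration). The point is that a module "signed" by either trusted root passes the same check, and neither root gets special permissions from the verification loop itself:

```python
import hmac
import hashlib

# Toy model of CAPI-style module verification. HMACs stand in for
# RSA signatures; key material and names are illustrative only.
TRUSTED_ROOTS = {
    "_KEY": b"microsoft-root-key-material",
    "_NSAKEY": b"second-root-key-material",
}

def sign(root_key: bytes, module: bytes) -> bytes:
    """Produce a toy 'signature' over a module with a given root key."""
    return hmac.new(root_key, module, hashlib.sha256).digest()

def verify_module(module: bytes, signature: bytes) -> bool:
    # Accept if ANY trusted root produced the signature -- the loop
    # treats every root identically, with no special-case privileges.
    return any(
        hmac.compare_digest(sign(key, module), signature)
        for key in TRUSTED_ROOTS.values()
    )

module = b"example-csp-module"
assert verify_module(module, sign(TRUSTED_ROOTS["_KEY"], module))     # _KEY works
assert verify_module(module, sign(TRUSTED_ROOTS["_NSAKEY"], module))  # _NSAKEY works equally
assert not verify_module(module, b"\x00" * 32)                        # garbage rejected
```

Under this structure, an _NSAKEY backdoor would need extra code paths granting that key powers _KEY lacks, which is exactly what the comment asks to see in the disassembly.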

Your conspiracy theory is "The NSA is evil and also stupid." This is a more complex, less likely, and less worrisome conspiracy theory than "The NSA is evil." If the only thing we have to worry about from the NSA is things bungled as badly as this alleged _NSAKEY backdoor and the actual Dual_EC_DRBG backdoor (which was noticed by cryptographers basically instantly), we have nothing to worry about. That doesn't seem like the rhetorical position you want to take.


It really feels like you’re trying to distract from the fact that you have no idea how the supposed NSAKEY backdoor works if it exists.

How would the signed payload to activate this backdoor be delivered? Where’s the code that receives it? Where’s the code that then processes that signed payload?

It’s not like this stuff is terribly hard to reverse, you’ll almost certainly be able to easily find all the symbols and probably even leaked source on various NT-related forums.


Yeah I don't think my comparison to fluoridated water is out of line. The entirety of the NSAKEY evidence is "it has NSA in the name." That's not even as strong as the evidence that fluoridated water has minimal health benefits and more risks than the government claims, which is weak evidence but at least it exists.


> The entirety of the NSAKEY evidence is "it has NSA in the name."

Your comparison is out of line because of ridiculous characterizations like this. Microsoft said that it was a backup key, which either means that they have the most poorly implemented scheme for backing up cryptographic materials ever devised, or they don't mean what most people think when they hear the word "backup". Microsoft then claimed that the backup was necessary for passing the export control review, which is a bold lie to tell since the Export Administration Regulations are available for review to everybody. One thing not included in the EAR that might influence Microsoft's conduct in trying to get permission from the USG to reach global customers: executive orders. The government had a hard limit at 56-bits and was proposing that anybody wanting to export crypto beyond that needed to participate in their push for private-key escrow, which they were calling "key-recovery". Recovery... sounds kind of like a backup plan...

I provided links in my response to the parent comment.


None of the links you provided are evidence. They're all signs that something, somewhere, is fishy, so why wouldn't this be fishy too. I can provide you higher-quality links about how we need to stop putting fluoride in the water.

At the very least, retract your claim about how people who don't want fluoride in the water are "loons," and then maybe we can have a good-faith conversation. But if you want to dismiss people with actual science backing their views as loons, I'll dismiss you as a loon, too.


> 3) this behavior on the part of American tech would undermine their credibility globally. Wouldn't you reasonably be very skeptical of Microsoft if you were a foreign government if you knew they were actively working with the American government to pwn you?

The fact that foreign governments aren't more skeptical of Microsoft really baffles me. The American government isn't dumb enough to buy security products from Kaspersky after all, or devices from Huawei.


Maybe that was part of the motivation for Microsoft moving toward open-source. Showing their cards in a way that can be audited and verified to ease any potential foreign organization's concerns. Software is part of it, but hardware is the harder sell.


Afaik that's only for cloud. There the software runs on an opaque virtualization layer.

Windows is still closed source. Therefore there is a lot of speculation around the phone home capabilities.


I read a fantastic article on Stuxnet. I can't find it now, but this should cover it if you don't know the details: https://www.csoonline.com/article/3218104/what-is-stuxnet-wh...

To pull off what they did, they needed to know multiple zero-day vulnerabilities in Windows. To any reasonable security-minded person, knowing that many vulnerabilities and having the ability to capitalize on them is likely only achievable a few different ways, one of those ways being having an arrangement with the company whose vulnerabilities you were exploiting.


> Wouldn't you reasonably be very skeptical of Microsoft if you were a foreign government if you knew they were actively working with the American government to pwn you?

I would. However, I'm in no position to deny that I need Microsoft's products. Assuming I'm Iran, I'm not going to convert my entire digital infrastructure away from the status quo. I literally won't be able to anyway.

Now if I'm America and we're talking about Huawei undermining my customers.... Yeah I don't have to put up with that and Huawei will lose. I don't care, I've got Samsung and Apple. So I see your point, but you're misrepresenting the scale.


>Roll out the patch to American companies first, but leave select targets still vulnerable.

I would pay for tickets to the hacker news thread if that ever actually went down


Don't you think it's also dangerous?

There were stories that US power plants could be made inoperable. Also, how do you really monitor bad actors using those vulnerabilities to do other kinds of damage?

To me, cyber weapons don't seem that easy to use: they're volatile, secret, can be fired very quickly and silently, and are very difficult to control.

So far one could say the security market was not big enough, and computers not widespread enough, so it was tolerable to leave the 0day market open as long as the US had the upper hand, but as this market gets bigger, the damage is going to show up more and more.

Having the upper hand in terms of cyber warfare is one thing, but I have a problem with the cyber warfare terminology itself. I'd rather live in a world where weapon trafficking is neutered than a world where bandits can cause damage (small, but still damage).

I think the day is coming where higher security standards will be required by law, because the swiss cheese strategy won't work for long.

Also, I have doubts that the NSA can really keep the upper hand just because they have more brainpower. Cyber warfare is not only the sum of the weapons, because anybody can be taught computer security and learn how to build them.


>While I agree and I hate the NSA's practice of hoarding zero days, they do this to keep software vulnerable on purpose. They've invested millions in uncovering zero days SPECIFICALLY so they could then spend millions more to be the only ones exploiting these vulnerabilities. If they share the zero day, the vendor will write a patch and it's game-over for their multi-million dollar exploit.

What happens when those zero days are leaked or stolen?[1]

The government couldn't even protect nuclear secrets.[2]

At least with nukes, it's difficult to obtain the fissile material. Even the poorest, smallest, most isolated countries can utilize stolen or leaked 0-days, and depending on how they're used they could kill more people than a nuclear bomb. (E.g., shut off the power to the Midwest during winter.)

[1] https://en.wikipedia.org/wiki/The_Shadow_Brokers

[2] https://en.wikipedia.org/wiki/Atomic_spies#Notable_spies


> Imagine how effective an intelligence campaign would be against, say, Iran, if the NSA went to Microsoft, disclosed a vuln, and then told them to patch the vuln ONLY in specific markets.

Not very. Do you think Iran doesn't have a Windows VM somewhere within the US and is also incapable of reverse-engineering patches?


I'd be surprised if they'd bother; at that point I'd just fork Linux and go from there...


You’re surely right about the incentives at play, and why no intelligence agency is going to spend millions doing x company’s validation work for them. But how in the world would selective patching work? How do you hide that?


"We are testing beta features on different customers based on regions."


Microsoft would quickly lose any trust they might have left in foreign markets. Plus, I think elevating the scope of the NSA is a bad idea, since they have already misused their power, as evidenced by multiple leaks. It would be an overall terrible idea.

The cyber warfare scare is just a false narrative, reiterated to fortify arguments for further military and intelligence arming.

> It's insane and criminal, but that's how it is.

It's insane and criminal in my opinion precisely because these institutions are not beyond reproach.


How would you hide these targeted updates from foes? I’m sure they can at least check the hashes of the updates, if not more. Presuming we’d be targeting their military industrial complexes who one might presume monitor for these kinds of things.
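That kind of check is trivial for a target to run: fetch the "same" update from vantage points in different regions and compare digests. A hypothetical sketch (the payload bytes here are made up; a real check would download the actual update packages):

```python
import hashlib

def digest(payload: bytes) -> str:
    """SHA-256 hex digest of an update payload."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical payloads as seen from two regional mirrors / vantage points.
us_payload = b"patched-binary-v1"
foreign_payload = b"patched-binary-v1"

# Identical digests mean identical bytes were shipped everywhere; any
# regionally targeted divergence would be immediately visible.
assert digest(us_payload) == digest(foreign_payload)
assert digest(us_payload) != digest(b"unpatched-binary-v0")
```

This is why selective patching is hard to hide: the moment any region's update bytes differ, the discrepancy itself is evidence.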


What stops non-USA companies from applying USA available patches?


Imagine how effective an intelligence campaign would be if the NSA went to Microsoft and let them place backdoors for specific targets.

Where would you draw the line?


This is one of those situations that makes me wonder at how obvious the right way to go is, and how unlikely that is to happen. Offense/defense costs are not even close to being symmetrical, it is insane that the USG would advance the state of the art in electronic warfare - while not even pretending to try and match the effort in defense. This is why we abandoned our biological weapons program, we were effectively developing the technology for incredibly cheap weapons of mass destruction that any banana republic could mimic... not unlike the rootkit leaks.


I think I'd argue the opposite. I wouldn't compare hackers to nuclear missiles, but defending against hackers doesn't seem to be realistically possible. Consider the state of the security world now, how many piles of money already get thrown at security, and how vulnerable everything still is. With that in mind, I find it hard to believe that even in the fantasy world where all of the "good guy" nations, whoever you think those are, work with vendors to fix every flaw they find instead of holding them for later exploitation against selected targets, the security landscape would be meaningfully different. Somebody would still find and hold zerodays and use them in attacks. Whoever that is would have a huge advantage against whoever didn't.

I suppose in a perfect world, we'd all melt down all of our guns and turn them into wrenches or something instead and all live in harmony. But we don't live in that world. In the real world, if you melt down all of your guns, someone else will keep theirs, and use them to take your stuff, because you can't hurt them anymore. And similarly, if you disclose and patch all of your zerodays, someone else will still have their own, and will use them to hack your stuff and cause you damage, and you'll have no way to fight back, except to break out the guns and start a hot war with them.

We're still learning how things work in the cyber-war realm, but I feel doubtful that it'll ever be possible to have defenses so good that you can rely on the fact that nobody can touch you.


What the NSA does to China is irrelevant. But they hurt their own American companies in the name of cyber security. I personally see this as akin to the British King using the East India Company to build colonies abroad.

This will have ugly consequences.


I suspect there's a very real fear that their surveillance capabilities will completely go dark and they'll have no way of going the extra mile on intel gathering when it's sorely needed.

I'm not a supporter of this, but I do believe there is a genuine, good-faith motive here.


Their primary mission isn't the discovery and publication of critical zero-days, and it doesn't make them good-faith actors if they occasionally disclose one when it's both convenient and strategically advantageous.


The NSA is working to actively improve security- for the US government. The NSA should not give security secrets to our enemies abroad...


The problem with this sentiment is that every other country is technically the enemy. The NSA has no qualms about stealing corporate secrets or influencing governments of allies who send their kids to die on your battlefields.


> is that every other country is technically the enemy

No, they aren't. The NSA shares many secrets with allies.



