The term of art for this kind of attack is SPA (simple power analysis). That it is possible at all means this is amateur-hour engineering with very little security QA. The more sophisticated attacks use DPA (differential power analysis), which accumulates the results of multiple runs and performs statistical analysis.
When I was on a team that made a similar device over a decade ago, we were up to our eyeballs in academic papers about SPA and DPA hardware and software countermeasures. That doesn't mean we didn't make any mistakes, but at least we were knowledgeable enough to hook an oscilloscope up and see what's going on.
These guys completely ignored an entire class of attacks that has been known in detail for a couple of decades (in reality since 1943, declassified in 1972 [1]). I wouldn't trust these guys to protect anything of value.
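To make the DPA idea concrete, the classic version is a difference-of-means test. Here's a minimal sketch with an entirely made-up leakage model and invented numbers, just to show why accumulating runs works:

```python
import random

random.seed(0)

# Assumed leakage model (invented for illustration): the device draws a
# tiny bit more power when a secret-dependent intermediate bit is 1.
SECRET_BIT = 1

def measure(input_bit):
    processed = input_bit ^ SECRET_BIT      # hypothetical intermediate value
    leak = 0.1 * processed                  # tiny data-dependent component
    return leak + random.gauss(0.0, 1.0)    # buried under 10x larger noise

# Accumulate many runs with known inputs.
runs = []
for _ in range(20000):
    b = random.randrange(2)
    runs.append((b, measure(b)))

# For each key-bit guess, partition the traces by the predicted
# intermediate value and compare the group means. Only the correct
# guess sorts the traces consistently, so only it shows a clear gap.
def diff_of_means(guess):
    g0 = [t for b, t in runs if (b ^ guess) == 0]
    g1 = [t for b, t in runs if (b ^ guess) == 1]
    return sum(g1) / len(g1) - sum(g0) / len(g0)

recovered = max((0, 1), key=diff_of_means)  # recovers SECRET_BIT despite the noise
```

No single measurement is useful here; the statistics over 20k runs are what expose the key bit, which is why countermeasures have to break the correlation rather than just add noise.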
There is a software solution called blinding. The idea is to 1) scramble your input (message, private key) by multiplying with a random blinding factor, 2) do the crypto math and 3) divide out the random blinding factor.
Since all the crypto math happens with what is now essentially random numbers, you gain no knowledge from power analysis.
Downside of this method is that it requires additional operations and your crypto algorithm needs to be suitable for this. There is literature on this method, but don't be confused by "blind signatures" which is something different.
Any crypto whiz around who can tell me whether secp256k1 ECDSA can be blinded?
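For reference, here's what steps 1–3 look like in the RSA case, where the math maps on directly (toy teaching parameters, not a real key; whether an analogous trick applies cleanly to secp256k1 ECDSA is exactly the open question above):

```python
import secrets
from math import gcd

# Toy RSA parameters for illustration only; real keys are 2048+ bits.
p, q = 61, 53
n = p * q            # modulus, 3233
e = 17               # public exponent
d = 2753             # private exponent: e*d ≡ 1 (mod lcm(p-1, q-1))

def sign_blinded(m: int) -> int:
    # 1) scramble the input with a fresh random blinding factor r
    while True:
        r = secrets.randbelow(n)
        if r > 1 and gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n
    # 2) do the private-key math on what is now a random-looking value
    s = pow(blinded, d, n)        # = m^d * r^(e*d) = m^d * r  (mod n)
    # 3) divide out the blinding factor
    return (s * pow(r, -1, n)) % n

# Same result as the unblinded computation, but the exponentiation
# never touched the raw message:
assert sign_blinded(65) == pow(65, d, n)
```

The attacker watching the power trace only ever sees the device exponentiate `blinded`, which is uniformly random for each run.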
But this is an enclosed system. Can't those blinding factor values also be pulled via the same monitoring techniques, or at least give enough information to reduce the number of possible values for brute forcing them?
Definitely not an expert, but I think this defends against differential power analysis, as you can't statistically correlate multiple runs (assuming you can't force or precisely measure the random number used each time).
In this particular attack, it looks like the attacker measured the power drawn over USB. This could probably be defeated by buffering power.
Include some kind of energy storage on the device, such as a battery or an ultra-capacitor. Your external power source (e.g., USB) charges your storage device, and your computation is powered from the storage device. Only charge the storage device between complete operations.
Someone watching power over USB would then see no power draw during key operations, and a smooth power draw between operations.
If your threat model includes attackers who can probe inside the device while it is performing operations, and who could therefore measure power on the far side of the battery or capacitor, then buffering power won't be sufficient.
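As a toy simulation of the buffering idea (all power numbers are invented):

```python
def usb_trace(op_draw, buffered):
    """Return what an attacker probing the USB power pin would see."""
    if not buffered:
        # No buffer: the data-dependent draw passes straight through.
        return list(op_draw)
    # Buffered: the operation runs from local storage (USB draw is zero),
    # then the storage is recharged at a fixed, data-independent rate.
    spent = sum(op_draw)
    charge_rate = 5.0
    recharge = [charge_rate] * round(spent / charge_rate)
    return [0.0] * len(op_draw) + recharge

# A square-vs-multiply style pattern that would leak key bits directly:
leaky = [3.0, 7.0, 3.0, 7.0, 7.0]
unprotected = usb_trace(leaky, buffered=False)  # shape reveals the secret
protected = usb_trace(leaky, buffered=True)     # flat zero, then constant charging
```

One caveat even in this idealized model: the recharge duration still reveals the total energy consumed per operation, a much coarser but nonzero leak, so a real design would also want to recharge a fixed amount regardless of what was spent.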
I was thinking the same thing. The equivalent of a power-factor-correction stage at the input makes this type of attack impossible unless they have access inside the product.
I have absolutely zero experience in things like this, but would an integrated component (so that it can't be ripped off) that draws a random amount of heat (through a controllable variable resistor) solve this?
Random noise would just mean that you have to take multiple measurements and average them. Depending on the system in question, "multiple" may or may not be impractically many.
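A quick simulation of why added random noise alone doesn't help (all numbers invented):

```python
import random

random.seed(1)

# The "signal" is a small secret-dependent bump; every single measurement
# buries it under noise with 5x the amplitude.
signal = [1.0 if 10 <= i < 15 else 0.0 for i in range(30)]

def one_trace():
    return [s + random.gauss(0.0, 5.0) for s in signal]

def average_traces(n):
    acc = [0.0] * len(signal)
    for _ in range(n):
        for i, v in enumerate(one_trace()):
            acc[i] += v
    return [a / n for a in acc]

# Averaging n traces shrinks the noise by a factor of sqrt(n):
# one trace is useless, a few thousand make the bump obvious again.
avg = average_traces(5000)
bump = sum(avg[10:15]) / 5    # region carrying the secret
floor = sum(avg[:10]) / 10    # quiet region
```

With 5000 traces the bump sits right around 1.0 while the floor averages out near 0, so the noise generator only raised the attacker's trace count, not the difficulty class of the attack.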
It's not out of scope for a hardware wallet. It is by design supposed to be losable without compromising your private key. Using an actual secure element (engineered in silicon to thwart this type of side-channel exploit) or even clever software can prevent this attack, or at least make it exponentially more difficult.
You have to enter a pin before unlocking the trezor, and you can add a passphrase to your wallet. Both make the attack much more expensive for an attacker who just stumbled on the device.
Because any process with i/o capability on the host OS can read the private key on the USB?
Short of an oscilloscope hiding in the computer you use (totally possible), in theory no process can derive the private key from the Trezor.
Even if you have processes that blatantly copy every USB device's contents (or even log all interactions verbosely) and log all key presses and clipboard interactions on the machine, you can still use a Trezor without compromising anything.
You can also verify that your clipboard is not being manipulated as the Trezor can verify the address it will be signing a transaction with on the display.
I believe they are suggesting that the above comment, "Short of an oscilloscope hiding in the computer you use," might not be so far-fetched. The hardware needed for the type of analysis used in this attack has been around a long time and is presumably very cheap at scale. So if, for example, a nation or company that controls the manufacture of a large number of USB controllers decided they wanted to do this kind of power analysis on all USB devices plugged into their controllers, they could do so easily.
Essentially, adding a microcontroller with an analog-to-digital converter that can watch the power pin inside the USB controller/port itself would be relatively simple and cheap.
Although they don't have 20 MHz of bandwidth or the necessary sample rate, there are USB3 power-delivery controllers on the market today that do exactly that. Some even embed the microcontroller too.
Malware on the host PC can steal your private key. An attacker who gets brief physical access (e.g., you look away for a minute) can quickly copy a USB drive.
While this attack shows the Trezor is probably a bit amateur-hour, it does provide some amount of value.
Just like you can prevent a "paper wallet" from being compromised by not letting anyone casually look at it?
If I invested in one of these hardware wallets, I'd want an attack to cost at least $X, where $X is greater than the value in the wallet. I'd also like some time component Y that would let me transfer the money before the private key was found.
The main selling point of hardware wallets is that they can interface with an internet-connected device to sign transactions without exposing the private key to that device, which has a much larger attack profile.
Cost $X to attack? That makes complete sense. And if you know the physical device is compromised then you can "break the seal" by restoring the seed phrase on another device to transfer the funds elsewhere before an attacker is able.
There's a whole industry that's doing just that. Look up Hardware Security Modules. It is likely that you have a security device like that in the machine you are using right now (a Trusted Platform Module). People didn't simply throw their hands up; there are solutions for various problems in this space, with physical access present in the threat model.
>It is likely that you have a security device like that in the machine you are using right now
funny you say that, because TPMs aren't actually mandated to be tamper resistant, only tamper evident[1]. what this means is that you won't be able to extract the keys without visibly damaging the device, but if you delid the chip and probe it, you can probably extract the keys. I suspect it's the same with other HSMs you see in everyday life (smart cards, smartphones with TrustZone, etc.).
There are different devices for different security needs. When you're protecting the key material for a revocable certificate, tamper evidence is sufficient: when you detect tampering, revoke the certificate. FIPS 140-2 Level 2 devices provide this level of security and are common in end-user credentials like smart card badges and the TPMs in laptops. FIPS 140-2 Level 3 provides tamper resistance meant to defeat most attacks; that's the level you'd want to protect a root of trust or an important encryption key. Level 4 devices are meant to hold up against as many attacks as possible, even when the attacker can push the physical operating environment far outside normal bounds (solvents, liquid nitrogen, extreme heat, etc.).
i did a quick skim of the article for "resist" and couldn't find anything to back your claim. all the article says is that smart cards have better security because they're isolated from the host (which is a security measure, but doesn't say anything about physical tampering resistance), and that some smart cards have tamper resistance built in.
For true tamper resistance you need to have some way to actually detect tampering and erase the secrets, which usually leads to some battery-backed SRAM and associated tamper response circuitry.
While there are some smart cards and smartcard-like HSMs with an integrated battery (Fortezza comes to mind, but it uses the battery primarily for its integrated RTC and seems not to contain any tamper-detection mechanism), common smart cards do not have a battery.
If you'd like more evidence of what I'm saying, read Smart Cards, Tokens, Security and Applications or Secure Smart Embedded Devices, Platforms and Applications. Both are graduate textbooks covering smart card design and development.
The term "smart card" is frequently misused in popular nomenclature. As a technical term, it refers specifically to contact or contact-less (like NFC) cards with an embedded chip which are, at minimum, physically tamper resistant. For example, a typical credit card is not a smart card. A SIM card is a smart card (or token).
Tamper resistant does not imply tamper proof, which can also be a source of confusion.
There's no absolute solution. But that doesn't mean that they shouldn't have protected against well-understood classes of attack. Vulnerability to an attack that needs 5 minutes of physical access would be much better than vulnerability to an attack that needs 30 seconds.
[1] https://www.wired.com/2008/04/nsa-releases-se/