Hacker News | wpowiertowski's comments

Read the whitepapers: the client side generates the keys and only transmits the public keys to the server. E2EE is truly end-to-end (as in client to client). Meta has no access to the content of your messages; the same has been true of WhatsApp.
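As a sketch of what "client-side key generation" means here, consider a toy Diffie-Hellman exchange (illustrative only; WhatsApp actually uses X25519 and the Signal double ratchet). Each party's private key never leaves its device, and the server only ever relays public values:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; toy-sized, for illustration only
G = 5

def generate_keypair():
    private = secrets.randbelow(P - 2) + 2  # never leaves this client
    public = pow(G, private, P)             # the only value transmitted
    return private, public

alice_priv, alice_pub = generate_keypair()
bob_priv, bob_pub = generate_keypair()

# The server relays alice_pub and bob_pub but cannot derive the secret.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret
```

The point is the asymmetry: knowing both public values and the relay traffic doesn't let the server recover either private key.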


I never said the keys are sent to any servers.

The keys are still generated and stored using software they wrote.

Second, they also control who the keys are exchanged between.

This is in contrast to some chat apps (which are painful as hell to use) where you have to manually exchange keys, meaning you have to engage with the party you want to talk to, so you can confirm who you are really encrypting messages for. It’s physically impossible to be given the wrong person’s key, because you personally had to get it.
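A minimal sketch of that manual verification, assuming SHA-256 fingerprints compared out of band (the key bytes and the chunked grouping here are made up, not any real app's format):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Group the first 24 hex chars into readable 4-char chunks
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

# Both parties compute this locally and read it aloud in person or on a
# call; a substituted (MITM) key yields a visibly different fingerprint.
alice_key = bytes(range(64))  # placeholder public key bytes
print(fingerprint(alice_key))
```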


> The keys are still generated and stored using software they wrote.

This is a prerequisite for forward secrecy, which is arguably much more relevant.

> It’s physically impossible to be given the wrong person’s key, because you personally had to get it.

Does that matter at all if the (untrustworthy, in your threat model) software just exfiltrates all messages?

If you don't trust your encryption software, it's game over (unless it encrypts everything fully deterministically and you regularly audit its outputs).


Well, these apps don’t let you verify the keys even if you want to, so you can’t tell whether you’re being man-in-the-middle’d.

Some people said they are finally adding key transparency features to let you do that, but it should have been there since the start. SSH, something a lot of people already use, has literally had that forever. It’s like basic cryptography 101 if you design an encrypted protocol that isn’t using a trusted third party for key verification (like certificate authorities in TLS/SSL).

If you implement ANY encrypted protocol, key verification is extremely important. If you can’t verify that a key is possessed only by your recipient, you can’t verify who can read your message.
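The SSH model is trust-on-first-use (TOFU): pin a key's fingerprint the first time you see it, and warn loudly if it ever changes. A minimal sketch with hypothetical peer names, loosely modeled on how `known_hosts` behaves:

```python
import hashlib

known_hosts: dict[str, str] = {}

def check_key(peer: str, public_key: bytes) -> str:
    fp = hashlib.sha256(public_key).hexdigest()
    if peer not in known_hosts:
        known_hosts[peer] = fp               # first contact: trust and pin
        return "pinned"
    if known_hosts[peer] != fp:
        return "MITM WARNING: key changed!"  # possible interception
    return "ok"

print(check_key("alice", b"key-1"))      # first use: pinned
print(check_key("alice", b"key-1"))      # same key: ok
print(check_key("alice", b"evil-key"))   # changed key: loud warning
```

TOFU doesn't protect the very first exchange, which is exactly the gap that key transparency logs are meant to close.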


WhatsApp has always allowed key verification (at least since they've supported encryption), as far as I remember.

> It’s like basic cryptography 101 if you design an encrypted protocol that isn’t using a trusted third party for key verification (like certificate authorities in TLS/SSL).

SSH/TOFU is one model, PKI is another. Both have their respective merits, especially when combining PKI with certificate transparency.


In a way they already do: all political ads on Facebook (in any country) are public information through their Ad Library Report: https://www.facebook.com/ads/library/report/?source=archive-...

Simply take the political ad spend from that report, divide it by the total revenue in their earnings reports, and you'll get the percentage of revenue from political ads.
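The arithmetic is just a ratio; the figures below are hypothetical placeholders, not Meta's actual numbers:

```python
political_ad_spend = 400_000_000   # hypothetical, from the Ad Library report
total_revenue = 85_000_000_000     # hypothetical, from an earnings report

share = political_ad_spend / total_revenue * 100
print(f"{share:.2f}% of revenue")  # -> 0.47% of revenue
```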


My usual dialog with Siri:

Me: "Hey Siri, play Radiolab podcast"

Siri: "Which Radiolab podcast, Radiolab or Radiolab: More Perfect?"

Me: "Radiolab"

Siri: "Which Radiolab podcast, Radiolab or Radiolab: More Perfect?"

Me: "Radiolab"

Siri: "Which Radiolab podcast, Radiolab or Radiolab: More Perfect?"

...

Me: "The first one"

Siri: "I don't know 'the first one'"

Me: "Siri you're useless"

Siri: "That's not nice"

Me: "Could be but it's true"


I set up a similar system, but with IPsec (https://github.com/jawj/IKEv2-setup) and Pi-hole on DO. The best part is that the linked IPsec setup is trivial to install and also generates profile files that leverage the OS VPN capability on any iOS device without needing to install extra apps (and also force VPN connectivity by default, so you don't need to remember to enable it).


Exactly why cryptographic authentication is required. There are fuzzing devices today that present themselves as known VID/PID/HID combinations with generic drivers in modern OSes, and they can exploit that.

One of the primary drivers for this is power delivery. Imagine a scenario where you have an authentic power brick and laptop and buy a counterfeit USB-PD cable: the power brick sends 100W over it, the cable melts since it was really a $5 knock-off, and you end up with a fried USB-C port in your laptop and possibly a fire. I'm sure most people would wish that the power brick/laptop checked that the cable is genuine and built to handle the power.
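To illustrate the idea (and only the idea): the real USB Type-C Authentication spec uses X.509 certificates and ECDSA signatures, but a toy HMAC challenge-response with an assumed shared vendor secret shows why a knock-off can't pass the check:

```python
import hashlib
import hmac
import os

VENDOR_SECRET = os.urandom(32)  # genuine cables would ship with this key

def cable_respond(secret: bytes, challenge: bytes) -> bytes:
    """What the cable's marker chip computes for a given challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def charger_verify(response: bytes, challenge: bytes) -> bool:
    """Charger checks the response before negotiating high power."""
    expected = hmac.new(VENDOR_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

challenge = os.urandom(16)
genuine = cable_respond(VENDOR_SECRET, challenge)
knockoff = cable_respond(os.urandom(32), challenge)  # wrong key

assert charger_verify(genuine, challenge)       # OK: negotiate 100 W
assert not charger_verify(knockoff, challenge)  # fail: cap at safe default
```

Certificates avoid the obvious weakness of a single shared secret here; this sketch only shows the challenge-response shape of the protocol.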


That seems like a very weak failsafe: likely to be easily cracked, with those bad actors only slowed down a bit, if history is any guide.

Why not implement something like detecting the voltage drop between charger and device? Short-circuit detection, etc. You can do a lot if both sides can communicate with each other, and at that point it matters a whole lot less if your cable lies to you; plus you catch a lot of other failure conditions.
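That check could look something like this; the thresholds are made-up illustrative values, not from any USB specification:

```python
def cable_ok(v_charger: float, v_device: float, current_a: float,
             max_ohms: float = 0.2) -> bool:
    """Infer cable resistance from the end-to-end voltage drop and
    back off to a lower power level if the cable looks too lossy."""
    if current_a <= 0:
        return True                      # no load, nothing to infer
    drop = v_charger - v_device
    resistance = drop / current_a        # Ohm's law: R = V / I
    return resistance <= max_ohms

assert cable_ok(20.0, 19.8, 5.0)         # 0.04 ohm: fine
assert not cable_ok(20.0, 17.0, 5.0)     # 0.6 ohm: throttle power
```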


Apparently, the BC 1.2 spec's verbiage suggests this is already a hard requirement:

The OTG device is required to limit the current it draws from the ACA such that VBUS_OTG remains above Vaca_opr min.


There are more use cases where being able to cryptographically prove the identity of a device is very useful from a security point of view; PD is simply one of them.


> One of the primary drivers for this is the power delivery - imagine a scenario...

Not a USB engineer, but a quick peek at the BC 1.2 spec suggests the ball would have to be dropped on at least two additional fronts for the suggested failure mode to occur:

1.) The Charging Port device vendor for failing to implement any of the allowed shutdown measures suggested in BC1.2 §4.1.4; and

2.) the Downstream Port device vendor for failing to constrain current draw based on sensed Vaca_opr min = 4.1V (Table V) per ibid. §§6.2.2 and 6.3.1.

I can see how the knock-off cable would be toast and, depending on its insulation rating, potentially catch fire, but I struggle to see how either the CP or DP USB-C port would fry (considering they're both designed for a target power).


USB-PD only raises the limit from 3 amps to 5. It doesn't increase the risk of fire very much at all, even in a counterfeit cable.
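Quick sanity check on those numbers, assuming the 20 V PD rail (so 100 W ÷ 20 V = 5 A, versus 60 W ÷ 20 V = 3 A for an ordinary cable):

```python
baseline_a = 60 / 20   # 3 A: ordinary USB-C cable at 20 V
emarked_a = 100 / 20   # 5 A: e-marked 100 W cable at 20 V

# Resistive heating scales with I^2 * R, so for the same cable
# resistance, 5 A dissipates (5/3)^2 ~ 2.8x the heat of 3 A.
heat_ratio = (emarked_a / baseline_a) ** 2
print(baseline_a, emarked_a, round(heat_ratio, 2))  # -> 3.0 5.0 2.78
```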


Why not just switch to PTP (IEEE 1588)?


Being Polish makes me very proud of the bravery of Witold Pilecki; at the same time, the Russian army did some horrible things during the "saving" of eastern Europe. However, if you look at the numbers, they also paid the ultimate price in casualties: https://vimeo.com/128373915


FileVault does not require a password input at EFI. It decrypts user data in memory after user login.


I thought current versions of FileVault encrypt the entire OS and the login screen is just a fancy boot partition?


You're mostly correct.

On a CoreStorage FileVault 2 system, the Recovery HD is used as the boot loader, calling an EFI program named "boot.efi" present on the filesystem.

On an APFS system things are a bit different: the Recovery HD is still used, but it is now a logical volume presented from the main volume group. With the update to High Sierra, a firmware upgrade was pushed out to all supported systems, enabling the EFI to grok APFS.

Edit: removed terminal output

