Hacker News | past | comments | ask | show | jobs | submit | cmcd's comments

> "We want to start by apologizing for the confusion we have caused by incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption. Zoom has always strived to use encryption to protect content in as many scenarios as possible, and in that spirit, we used the term end-to-end encryption."

Excerpt from their previous release above, only a few hours earlier.

Glad to hear they are starting to make improvements but waiting for public backlash to fix issues is a bad sign.


You are only storing the hash of the password so there is no reason to have arbitrary length constraints like 32. Why not make it several thousand characters?


There is actually a reason (though 32 seems pretty short; something like 256 is probably more reasonable).

Several thousand characters (or worse, unlimited length) opens you up to a form of DoS attack that exploits the fact that password hashing is a computationally heavy operation. See here: https://arstechnica.com/information-technology/2013/09/long-...

> Django does not impose any maximum on the length of the plaintext password, meaning that an attacker can simply submit arbitrarily large—and guaranteed-to-fail—passwords, forcing a server running Django to perform the resulting expensive hash computation in an attempt to check the password. A password one megabyte in size, for example, will require roughly one minute of computation to check when using the PBKDF2 hasher. This allows for denial-of-service attacks through repeated submission of large passwords, tying up server resources in the expensive computation of the corresponding hashes.


55 would make sense if they were using bcrypt. But since it's 32, they're probably not actually hashing the passwords securely at all.


The real problem there is using an algorithm that gets slower on longer passwords.

There's no need to have a cap bigger than a kilobyte though.


Is there a cryptographic hash algorithm that doesn't? It seems like that would make it non-cryptographic (since you will need to read each byte at least once).


Reading each byte once only takes a few microseconds. That's not the issue.

What you need is for the slow core of the algorithm to be fixed-speed.

Either by only reading the input bytes during initialization, or by only feeding a fixed number of input bytes into the core during each round.
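For reference, a standard way to get that fixed-speed property (not something from the thread, just the usual construction) is to pre-hash the input down to a fixed size in one fast pass, so the slow iterated core always does the same amount of work:

```python
import hashlib

def fixed_cost_kdf(password: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    # One fast pass over the (arbitrarily long) input yields a fixed
    # 32-byte key; the slow iterated core below then runs on
    # fixed-size input regardless of the original password length.
    prehash = hashlib.sha256(password).digest()
    return hashlib.pbkdf2_hmac("sha256", prehash, salt, iterations)
```

One caveat: feeding raw digest bytes into bcrypt specifically has known pitfalls (the digest can contain NUL bytes), so real systems usually base64-encode the prehash first.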


Isn't this mitigated by hashing the password on the client (in addition to hashing it on the server)?


But couldn't an attacker just skip the client-side hashing?


Then the server will reject it for being the wrong size.


The GP suggested it would also be hashed by the server - presumably because you can't trust client input.


You would hash it on the server because you don't want to turn the 'hash' into a plain-text password.

But instead of the server accepting an arbitrary string, it only accepts hexadecimal or base64 strings of a specific length. Which solves the problem.
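A sketch of that server-side gate, assuming (hypothetically) the client sends a hex-encoded, SHA-256-sized value:

```python
import re

# Assumed client format: exactly 64 lowercase hex characters,
# i.e. the hex encoding of a 32-byte hash. Nothing else passes.
CLIENT_HASH_RE = re.compile(r"\A[0-9a-f]{64}\Z")

def accept_client_hash(value: str) -> bool:
    """Reject anything that isn't a fixed-length hex string before
    doing any expensive work on it."""
    return CLIENT_HASH_RE.match(value) is not None
```

Whatever the client actually fed into its hash, the server only ever processes 64 bytes, so the oversized-input DoS vector is gone.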


I'm not quite understanding the protocol here.

If the client sends only a hex/base64 string, how can the server trust that it's the result of a password being fed to a KDF?


It doesn't matter.

The threat model is: Password is too long, lots of CPU is wasted, denial of service.

By only accepting strings of a certain length, the threat is defeated.

The client could send an intentionally bad password even if they weren't lying. If they lie, only the client is harmed, and in a non-new way.

So this scheme has one notable upside, and no notable downside.

There are better solutions, but this one is valid.


Hmm, I can see some benefits to the scheme, such as using the client's CPU cycles, and the plaintext password never having to be sent to my servers.

Maybe it's just that it's not the norm, but I'm still unsure I'd actually use this scheme.

As the owner/maintainer of a service, I want to be in control and know that my users' credentials are secure - there may even be legal obligations here in some countries.

TBH, my preferred solution here is never to silently truncate passwords, and just to set a "sensible" limit on password length, e.g. 256 characters. Yes, it's still an arbitrary limit, but it should be long enough to cover 99.9999% of users.


> As the owner/maintainer of a service, I want to be in control and know that my users' credentials are secure - there may even be legal obligations here in some countries.

The code doing the client-side hashing is just as secure as the rest of the client interface. You don't compromise anything by doing it.

Still, it's easier to do the extra hash locally on the server if you need it.
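A rough sketch of that two-layer scheme on the server side (function names and the work factor are mine, purely illustrative): the server never sees the plaintext, but still applies its own slow hash so a leaked database doesn't hand out login-ready values.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def store_credential(client_hash_hex: str) -> tuple[bytes, bytes]:
    # The client-side hash is treated as the "password" and run
    # through the server's own salted, slow hash before storage.
    salt = os.urandom(16)
    record = hashlib.pbkdf2_hmac(
        "sha256", bytes.fromhex(client_hash_hex), salt, ITERATIONS)
    return salt, record

def verify_credential(client_hash_hex: str, salt: bytes, record: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256", bytes.fromhex(client_hash_hex), salt, ITERATIONS)
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(candidate, record)
```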


How?


Yep. You do want a limit though, as you don’t want random attackers to DoS your login server by feeding megabytes to your KDF.


> The middle class already pays most of the taxes

I'm not so sure about that. The top 1% pays 37% of federal income taxes. Meanwhile, the bottom 50% paid only 3%, and 44% paid nothing at all. Depending on the source you use, the middle class in America is typically considered to top out at incomes between 100k and 122k. In either case the middle class pays less than half of income taxes. And looking at the bigger picture, income taxes only account for 60% of all taxes.

Sources: https://taxfoundation.org/summary-latest-federal-income-tax-... https://www.irs.gov/statistics/soi-tax-stats-irs-data-book


This can be misleading because it doesn't account for wealth concentration. When you realize HOW MUCH of the wealth the 1% control, 37% seems small.


That doesn't make the statement misleading. We tax income. The middle class doesn't pay the majority of taxes.

You may think there should be a wealth tax but the statement isn't misleading. The prior statement was inaccurate.


An example project using Pep. Most people are going to be skeptical that this can be done with a single line of code.


I'm not against DoH but there are definitely some downsides. For example, your token does not get reset on network changes. This means your DNS provider can track your DNS requests across networks, including VPNs.

With normal DNS anyone in the request chain can see a stream of DNS requests but there is no context. By the time the request is one or two hops from you it will be interwoven with tens of thousands of other requests making it impossible to know which one came from who.

With DoH the DNS provider will have a unique identifier to correlate requests back to a specific system/user. Google offers one of the most used DNS services; with DoH they will be able to track all DNS requests you make even if you turn on a VPN.


I am sure AWS, OCI, GCP, etc. all host scam websites with varying degrees of removal efficiency. What cases are you referring to specifically? Did they state they were not going to take these sites down or what was the context that you object to?


Pretty sure that is illegal; it wouldn't be a major complication to add a metal detector outside voting booths.


That would be considered voter suppression by lots of people. Also very expensive.


Why does MacOS only "sort of" have type-to-search?


It seems like an inaccurate statement; I would argue OSX was the first one to do type-to-search properly... Not a huge Ubuntu user, admittedly.


AFAIK, it only searches through your apps, not files.


Spotlight is the OG OS-integrated full-text search, originating in 2004/5:

https://en.wikipedia.org/wiki/Spotlight_(software)


Spotlight has not been useful to me in over half a decade. It never finds the files I need, despite the fact that I know they're there, I know I typed them right, and I know its settings haven't hidden or ignored them. Instead it almost always shows useless files in completely obscure parts of the OS that shouldn't even show up in Spotlight. I turned it off about a year ago and haven't regretted it since.


Maybe your md index is borked. Look into the mdutil (maintenance) and mdfind (search) commands.


Interesting- I find it very useful. It even searches my email that is locally cached. Lots of apps create hooks into it. Major part of the ecosystem that makes MacOS what it is.

Not sure why it's borked for you...


Spotlight on macOS most definitely searches files; much of the system is in fact built on file system metadata attributes. (It was originally designed by Dominic Giampaolo, who also architected BeOS's file system.)


Recently I discovered that Xcode uses Spotlight in an interesting way — when you want to convert crash logs from your apps into readable stack traces, you only need to place the relevant symbol files anywhere on the disk where Spotlight can find them. No need to import them directly into the IDE.


Not just FS metadata, it can index ID3 tags, EXIF, and whatnot, and it’s fairly extensible, although underused.

Kind of like the underpinning concept of AppleScript and app dictionaries, awesome tech and concepts, but it’s sad to see the promise of the extensible, composable desktop slowly dying.


> It was originally designed by Dominic Giampaolo

This is the first that I'm hearing of this. That is absolutely sick.


I just tried searching for a file with Spotlight and it found the file by name.


Like what? If you are a high productivity engineer you will have excellent health care through your employer.


What if you're a high productivity engineer working on your own startup?


Then you should be in the USA and not the EU if you're hoping for VC funding.


I didn't create a twitter account but my information could have been leaked via this process.


Is there some law against them collecting your information from your friends without your consent? I'm not a lawyer, just an observer of how these sort of things regularly go, and I'm going to guess that what they did here was 100% legal.

Obviously this is morally abhorrent, but in the US the laws are written to protect large corporations like Twitter, not their victims.


That's not damages though.

