
@DanBlake

That's a text file, which doesn't use the full range of byte values (0-255), so each character carries less than a full byte of entropy.
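A quick sketch of that difference, assuming Python's standard `zlib` as the compressor (the specific text and sizes here are just illustrative):

```python
import os
import zlib

# ASCII text uses only a narrow slice of the 0-255 byte range and is
# highly repetitive, so a general-purpose compressor shrinks it easily.
text = b"the quick brown fox jumps over the lazy dog " * 1000
random_bytes = os.urandom(len(text))  # full-range, unpredictable bytes

print(len(text), len(zlib.compress(text)))                  # text shrinks dramatically
print(len(random_bytes), len(zlib.compress(random_bytes)))  # random data does not
```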



I'm happy to reproduce this in other formats. The contents are still random, which is the crux of the issue here.


The contents are not purely random. For example, I can predict with 100% certainty that any given byte in the file is not the value zero (ASCII NUL).

Try compressing the output of /dev/urandom on your nearest convenient UNIX-like system. If you figure out a way to reliably and significantly compress that, please report back.
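A minimal version of that experiment, using `os.urandom` as a portable stand-in for reading `/dev/urandom` directly (block size and compressor choice are just assumptions):

```python
import os
import zlib

# Pull 1 MiB of cryptographically random bytes (backed by /dev/urandom
# on UNIX-like systems) and try to compress them at maximum effort.
sample = os.urandom(1024 * 1024)
compressed = zlib.compress(sample, 9)

# The "compressed" output is not smaller; the container overhead
# actually makes it a few bytes larger than the input.
print(len(sample), len(compressed))
```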


My point was never that it was possible 100% of the time, simply that it was possible some of the time.


On average, you will expand files, not compress them.

Also, the odds of finding even a single encrypted file of that length which can be compressed are lower than the odds of winning the lottery three times in a row. Trust me, you didn't just happen to stumble across a working example; you screwed up the test.
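A rough sketch of how lopsided those odds are: compress many independent random blocks and count how many actually shrink (the trial count and block size are arbitrary; `zlib` is assumed as the compressor):

```python
import os
import zlib

# Compress 1,000 independent 1 KiB blocks of random bytes and count how
# many come out smaller than the input. For random data the expected
# count is effectively zero: any statistical fluke would have to save
# more than the compressor's fixed header/checksum overhead.
trials = 1000
shrunk = sum(
    1
    for _ in range(trials)
    if len(zlib.compress(os.urandom(1024), 9)) < 1024
)
print(f"{shrunk} of {trials} random blocks compressed")
```

Every block expands slightly instead, which is the pigeonhole argument in action: a lossless compressor that shrinks some inputs must grow others, and random inputs land on the "grow" side essentially always.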


There are no real patterns in encrypted data, only tiny, exponentially rare streaks. This means compression is still useless, as originally claimed. You're arguing against something that wasn't said.



