The contents are not purely random. For example, I can predict with 100% certainty that any given byte in the file is not the value zero (ASCII NUL).
Try compressing the output of /dev/urandom on your nearest convenient UNIX-like system. If you figure out a way to reliably and significantly compress that, please report back.
On average, you will expand files, not compress them.
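If you'd rather see it than take my word for it, here's a minimal sketch in Python (using os.urandom as a stand-in for reading /dev/urandom directly; zlib at level 9 is just an arbitrary choice of general-purpose compressor):

```python
# Minimal sketch: compress blocks of OS-provided random bytes and compare sizes.
# os.urandom stands in for /dev/urandom; any general-purpose compressor shows
# the same effect.
import os
import zlib

trials, size = 100, 64 * 1024  # 100 blocks of 64 KiB each
expanded = 0
total_delta = 0
for _ in range(trials):
    data = os.urandom(size)
    compressed = zlib.compress(data, 9)
    delta = len(compressed) - len(data)
    total_delta += delta
    if delta > 0:
        expanded += 1

print(f"{expanded}/{trials} blocks grew; average size change: {total_delta / trials:+.1f} bytes")
# Typical result: every block grows by a small fixed overhead (zlib framing),
# and none shrink.
```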
Also, the odds of finding even a single encrypted file of that length which can be compressed are lower than the odds of winning the lottery three times in a row. Trust me, you didn't just happen to stumble across a working example. You screwed up the test.
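The arithmetic behind that is just a counting argument: a lossless compressor is one-to-one, and there are vastly fewer short outputs than long inputs, so no compressor can shrink more than about a 256^-k fraction of all files by k or more bytes. A rough sketch of the bound (the lottery comparison above is a colorful analogy, not something this computes):

```python
# Minimal sketch of the counting bound: no lossless compressor can shrink
# more than a tiny fraction of all n-byte files by k or more bytes, because
# there aren't enough shorter outputs to go around.
def fraction_compressible(k: int) -> float:
    """Upper bound on the fraction of n-byte files that any fixed lossless
    compressor can map to an output at least k bytes shorter: there are
    fewer than 256**(n-k) * 256/255 outputs of length <= n-k, versus
    256**n possible inputs."""
    return (256 / 255) * 256.0 ** (-k)

for k in (1, 4, 16, 64):
    print(f"shrink by >= {k:>2} bytes: at most {fraction_compressible(k):.3e} of files")
# Saving even 16 bytes is already below a 1-in-10^38 chance for a random
# (or properly encrypted) file.
```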
There are no real patterns in encrypted data, only tiny, exponentially rare streaks. That means compressing it is still useless, as originally claimed. You're arguing against something that wasn't said.
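To put a number on how tiny those streaks are, here's a quick sketch that measures the longest run of identical bytes in a megabyte of random data (os.urandom again as the source; the 1 MiB size is arbitrary):

```python
# Minimal sketch: longest run of identical bytes in random data.
# A run of length r starting at a given offset has probability ~256**-(r-1),
# so even in a megabyte the longest run is almost always only 3-4 bytes.
import os

data = os.urandom(1 << 20)  # 1 MiB of random bytes
longest = current = 1
for prev, cur in zip(data, data[1:]):
    current = current + 1 if cur == prev else 1
    longest = max(longest, current)

print(f"longest run of identical bytes in 1 MiB: {longest}")
# Such micro-streaks carry no exploitable structure: a compressor spends
# more bits describing where they are than it saves by encoding them.
```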
That's a text file, which doesn't use the full range of a byte (0-255), so each character carries less than a byte's worth of information even though it's stored in a full byte.
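Put differently: the file is stored at one byte per character, but each character carries well under 8 bits of information, and that gap is exactly what a compressor squeezes out. A rough sketch comparing the per-byte Shannon entropy of some text against random bytes (the repeated pangram is just an arbitrary stand-in for text):

```python
# Minimal sketch: per-byte Shannon entropy of ASCII text vs. random bytes.
# Text uses a small, skewed subset of the 256 possible byte values, so each
# byte carries well under 8 bits of information; random bytes are near 8.
import math
import os
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = ("the quick brown fox jumps over the lazy dog " * 2000).encode("ascii")
rand = os.urandom(len(text))

print(f"distinct byte values in text: {len(set(text))}")
print(f"text entropy:   {entropy_bits_per_byte(text):.2f} bits/byte")
print(f"random entropy: {entropy_bits_per_byte(rand):.2f} bits/byte")
# The repeated pangram comes out around 4 bits/byte of zero-order entropy;
# real prose lands somewhat higher, but still far below the 8 bits a raw
# byte can hold, which is why text compresses and random data doesn't.
```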