
I would think not. By lossily compressing the image you have already discarded information needed to reconstruct it. The method uses information from all three color channels to reconstruct the image, so all the information is there; the image is just blurry because the color channels are shifted (chromatic aberration).


To extend willvarfar's question: can you shift color channels in a reversible way so as to get better compression?

Compare the two processes:

1. raw_image -> image_compress(raw_image)

2. raw_image -> shift_color_channels(raw_image) -> image_compress(shift_color_channels(raw_image))
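The 2nd process is at least feasible in the trivially reversible case: if the per-channel shifts are integer pixel offsets, the transform is exactly invertible and loses nothing before the compressor runs. A minimal sketch in NumPy, where `shift_color_channels` and the per-channel offsets are hypothetical names chosen here (the offsets would in practice be estimated from the lens's aberration):

```python
import numpy as np

def shift_color_channels(img, shifts):
    """Shift each color channel by an integer (dy, dx) offset.

    np.roll wraps around the edges, so the transform is exactly
    invertible (no information is lost before compression)."""
    out = np.empty_like(img)
    for c, (dy, dx) in enumerate(shifts):
        out[..., c] = np.roll(img[..., c], (dy, dx), axis=(0, 1))
    return out

def unshift_color_channels(img, shifts):
    # Applying the negated offsets undoes the shift exactly.
    return shift_color_channels(img, [(-dy, -dx) for dy, dx in shifts])

# Hypothetical per-channel offsets (would come from a lens model).
shifts = [(0, 0), (1, 2), (-1, -2)]
rng = np.random.default_rng(0)
raw_image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

aligned = shift_color_channels(raw_image, shifts)
restored = unshift_color_channels(aligned, shifts)
assert np.array_equal(restored, raw_image)  # lossless round trip
```

Whether the aligned channels actually compress better is an empirical question; real chromatic aberration varies across the frame and is sub-pixel, so a single integer shift per channel is only a first approximation.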

* Thinking out loud *

Is the 2nd process feasible?

Is it possible that current image compression algorithms already pick up on the aberration patterns, which would obviate the need for the 2nd process?

In the case of image compression using wavelet transforms (which many methods use), and assuming wavelets can pick up on the aberration patterns, could the hurdle be finding a finite set of wavelet functions that works for the majority of lenses?


It's not just chromatic aberration. It's also blurry because of distortion from the (very simple) lens. Even high-quality lenses cannot reproduce the image sharply across the entire plane; they're noticeably softer in the corners, for example.

In this case their simple lens (akin to a Lensbaby) results in zoom-like blur in the corners and thus more extreme PSFs than the Canon lens, which has more or less the same PSF shape overall, just wider in the corners.

Also it appears that they used 32 wavelengths for computing the PSFs in the simple lens case vs. just three for the Canon lens.
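To illustrate why the wavelength count matters: a channel's effective PSF is a weighted sum of monochromatic PSFs over the wavelengths that channel responds to, so 32 samples capture dispersion that a single-wavelength (or three-wavelength) approximation smears over. A toy sketch, with an invented Gaussian blur model and an invented sensor response (neither is from the paper):

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Toy monochromatic PSF: a normalized 2-D Gaussian."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# Hypothetical: 32 sample wavelengths (nm) across the visible range,
# each with its own blur width (a toy dispersion model), weighted by
# an assumed green-channel sensor response.
wavelengths = np.linspace(400.0, 700.0, 32)
sigmas = 1.0 + 0.0004 * (wavelengths - 550.0) ** 2  # toy dispersion
weights = np.exp(-(((wavelengths - 530.0) / 60.0) ** 2))
weights /= weights.sum()

# Polychromatic channel PSF = response-weighted sum of mono PSFs.
channel_psf = sum(w * gaussian_psf(15, s) for w, s in zip(weights, sigmas))
assert np.isclose(channel_psf.sum(), 1.0)  # still normalized
```

With only three wavelengths the sum above collapses to three terms, which is a much coarser model of how the simple lens spreads each channel's light.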




