
My god! Someone finally discovered the "ENHANCE" filter!

Well, aside from the problem that information can't be created from thin air. You can fix certain lens errors, but you cannot extract details that aren't there on the original.

The most obvious consequence is that you can't use this technique to extract more megapixels than the sensor has.

A less obvious limitation is the sensitivity to noise; all these samples were taken in bright light, but if your crappy lens produces a very noisy image in low light, this method won't fix it.

Furthermore, the numerical aperture (NA) of the lens defines the highest possible resolution. Even with this method you can't resolve details finer than roughly wavelength/(2·NA), the diffraction limit.
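A quick back-of-envelope calculation of that limit (the numbers below are illustrative, not from the article):

```python
# Diffraction (Abbe) limit: smallest resolvable feature d = wavelength / (2 * NA).
wavelength_nm = 550   # green light
na = 0.1              # illustrative NA for a photographic lens (NA ~ 1 / (2 * f-number))
d_nm = wavelength_nm / (2 * na)
print(f"Smallest resolvable feature: {d_nm:.0f} nm")  # 2750 nm
```

No amount of post-processing gets you below that figure for a given aperture and wavelength.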

Unfortunately, there's no way around the principle "Garbage in, garbage out." Lensmakers rejoice, your business wasn't made obsolete after all!

Nevertheless, I can see exciting applications for this method; one that comes to mind is improving the pictures taken by photographic scanners used to digitize old books.



This is not about reducing noise; it is about fixing aberration and distortion.

So it even works on noisy images: you get a better-quality rendering of the (still) noisy image.

They never claim to extract details that aren't there. They just present the existing detail in a way that humans perceive as "sharper images".
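Correcting a known aberration amounts to deconvolving the image with the lens's point-spread function (PSF). A minimal sketch of the idea using Wiener deconvolution in NumPy; the Gaussian PSF here is a stand-in, not the article's calibrated profile:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-4):
    """Frequency-domain Wiener deconvolution: divide out the PSF's
    transfer function, regularized by k so noise isn't amplified
    where the transfer function is near zero."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Toy example: blur a synthetic image with a Gaussian PSF, then deconvolve.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
sharp = np.zeros((n, n))
sharp[24:40, 24:40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print(np.abs(restored - sharp).mean() < np.abs(blurred - sharp).mean())
```

Note that the regularization term k is exactly where the noise trade-off lives: the noisier the image, the larger k must be, and the less detail the deconvolution can restore.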

Your note about scanners is interesting. There was once a project that converted scanned vinyl records into MP3s. You had to scan the disk several times because of the scanner's aberrations (http://www.cs.huji.ac.il/~springer/DigitalNeedle/index.html).


You are right. I made my statements in reference to claims in the article like "This technique (...) may some day provide a software alternative for those who can’t afford high-end glass". The technique can definitely improve the image quality of a given lens, but it will never allow a "cheap" lens to replace an expensive lens.


Maybe I'm missing something, but most of your comments seem to be focused on limitations of the sensor, not on the lens itself. (It's hard for me to picture how a lens would behave differently at low light intensity than at high intensity: the light rays all bend the same way regardless, right?) Your point about diffraction-limited resolution is well taken, but for two lenses with the same aperture projecting onto identical sensors it sounds like this technique could make low-end products more competitive with high-end ones. (Let me know if I've missed your point.)


This method tries to correct specific lens errors. To do this, you need very good intensity resolution. If you have poor intensity resolution, information is lost and the lens errors cannot be corrected anymore. In low light, the low signal-to-noise ratio leads to poor intensity resolution, and the lens errors become irreversible.

A better sensor will only get you so far; since light is quantized (a ray consists of individual photons), there are physical limits to the intensity resolution possible at low light.
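That physical limit is photon shot noise: photon arrivals are Poisson-distributed, so for a mean of N photons per pixel the standard deviation is sqrt(N) and the SNR scales as sqrt(N). A small sketch with illustrative photon counts:

```python
import math

# Poisson shot noise: mean N photons per pixel -> std. dev. sqrt(N),
# so SNR = N / sqrt(N) = sqrt(N), regardless of sensor quality.
for photons in (10000, 100, 10):
    snr = math.sqrt(photons)
    print(f"{photons:6d} photons -> SNR ~ {snr:5.1f} ({20 * math.log10(snr):.0f} dB)")
```

At ten photons per pixel the SNR is about 3, and no sensor improvement can change that; only more light can.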

Once information is lost, there is no way to recover it. And that's why you just can't make up for a crappy lens with software.


Wouldn't the sensor be a little more important in this case than the lens?



