
> That's why we laugh at the "zoom -> 9 pixels -> enhance -> clear picture -> zoom ->.." trope in movies..

> AI does not change this..

It kind of changes it in some cases. The information might be there, but superhuman pattern recognition might be needed to extract it.

And of course, in cases where factuality doesn't matter, the missing information can be generated and filled in. That obviously doesn't work when you're looking for a terrorist's license plate, or when you want to watch an original performance in a movie.



"Information" in terms of, what does this thing look like -- could maybe be determined from other shots -- yes, sure.

But I think "information" in the context of film here refers to the indexical mark of light upon the image sensor, and in that case no. If it's not recorded, you can't extract it. And whatever you do put there is of little interest to the film buff to whom "image quality" means a more faithful reproduction of the negative that was seen in theaters.


I'm talking in the context of the quoted statement:

>"zoom -> 9 pixels -> enhance -> clear picture -> zoom ->.." trope in movies

You can have, e.g., a picture of a blurry piece of paper that humans can't read, but I imagine software might be able to read it (with reasonable accuracy and consistency). The information might be recorded, but hidden.
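To make that concrete, here's a minimal sketch in Python, assuming the blur is a known Gaussian point-spread function (in reality the PSF would have to be estimated, and you'd feed the result to an OCR step):

    import numpy as np
    from skimage import restoration

    def gaussian_psf(size=9, sigma=2.0):
        # Normalized 2-D Gaussian point-spread function.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    def deblur(image, sigma=2.0, balance=0.1):
        # Wiener deconvolution: inverts the assumed blur kernel while
        # damping the noise amplification a naive inverse would cause.
        # `image` is a float grayscale array in [0, 1].
        return restoration.wiener(image, gaussian_psf(sigma=sigma), balance)

Note this only recovers detail the optics actually recorded and smeared; it sharpens, it doesn't invent.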


That’s not really true, though. Unless you’re talking about a trivial distortion, like inverting the colors, there’s always some loss of information. In the case of blurry text, we’re still making an assumption that the paper holds some form of human writing, and not literally just a blurry pattern. Maybe there’s external context that confirms this, but based solely on the image itself, you can’t know. It’s basically a hash function: there are multiple possible “source” images of what’s on the paper that end up looking exactly the same in the blurry/low-res/degraded output video. Human-readable text is likely the most plausible, but it’s not 100%.
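The many-to-one point is easy to demonstrate. A toy sketch: two different rows of pixels produce the exact same low-res output, so no algorithm, however clever, could tell them apart from the output alone:

    import numpy as np

    def downsample(x):
        # Average adjacent pixel pairs -- a crude model of losing resolution.
        return x.reshape(-1, 2).mean(axis=1)

    a = np.array([0.0, 2.0, 0.0, 2.0])  # one "source" row
    b = np.array([1.0, 1.0, 1.0, 1.0])  # a different one
    print(downsample(a))  # [1. 1.]
    print(downsample(b))  # [1. 1.] -- identical outputs, distinct inputs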

You can’t reverse an operation that loses information with absolute certainty unless you are using other factors to constrain the possible inputs.


> You can’t reverse an operation that loses information with absolute certainty unless you are using other factors to constrain the possible inputs.

Ok, but there are other factors you can use to constrain the possible inputs.

For example, with license plates: you know the possible letters and how they appear when "blurred", so you can zoom-enhance them.
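Roughly what using that constraint looks like in code (a toy sketch: the 5x3 glyph bitmaps are made up for illustration, and the blur kernel is assumed known). Blur every allowed symbol the same way the image was blurred and pick the nearest match:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hypothetical allowed alphabet, as tiny 5x3 bitmaps.
    GLYPHS = {
        "I": np.array([[1,1,1],[0,1,0],[0,1,0],[0,1,0],[1,1,1]], float),
        "O": np.array([[1,1,1],[1,0,1],[1,0,1],[1,0,1],[1,1,1]], float),
        "7": np.array([[1,1,1],[0,0,1],[0,1,0],[0,1,0],[1,0,0]], float),
    }

    def decode(blurry_patch, sigma=1.0):
        # Nearest-match decode over the constrained alphabet:
        # blur each candidate glyph and pick the closest one.
        def score(c):
            return np.sum((gaussian_filter(GLYPHS[c], sigma) - blurry_patch) ** 2)
        return min(GLYPHS, key=score)

The "zoom-enhance" here is really the prior doing the work: the fewer symbols are allowed, the less image data you need to pick one.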


> For example, with license plates: you know the possible letters and how they appear when "blurred", so you can zoom-enhance them.

And even STILL, you can't be SURE.

Let's imagine some license-plate system optimized to give the biggest Hamming distance for visual recognition (for example, not both O and 0, only one of them; not both I and 1, only one of them; and so on) to make it as robust as possible.

Now you take some blurry picture of a license plate and ask the AI to figure out what it says. One of the symbols is beyond the threshold of what can be determined, and the AI applies whatever it's learned to conclude (correctly, per the rules) that the only allowed symbol is 'I'. Thing is, the license plate was a fake, and the unrecoverable symbol didn't conform to the rules: there was actually a 1 printed there, but the AI says it's an 'I', since that's the only allowed symbol. It just made up stuff that was plausible.
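Continuing the decoder sketch from upthread (same made-up glyphs), this failure mode is mechanical:

    # A fake plate carries a "1" -- a glyph outside the allowed
    # alphabet. The decoder still returns the nearest allowed symbol:
    # plausible, confident, and wrong.
    one = np.array([[0,1,0],[1,1,0],[0,1,0],[0,1,0],[1,1,1]], float)
    print(decode(gaussian_filter(one, 1.0)))  # -> "I"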

You cannot extract what's not there. You can guess, you can come up with things that _COULD_ be there, but it makes no difference: it's not there. It's the same with colorized vintage videos. We can argue that it wouldn't be wrong to assume a jacket was brown, since we have lots of data on that model, but we _CAN_NOT_ know whether that particular jacket was indeed brown; it might have been any other color that made the same impression on the monochrome film. The information is _GONE_.
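The monochrome case is the cleanest example of many-to-one. A small sketch using Rec.601 luma weights as a stand-in for the film's response: shift a color along the null space of the grayscale projection and you get a visibly different color with the same gray value:

    import numpy as np

    W = np.array([0.299, 0.587, 0.114])      # Rec.601 luma weights
    brown = np.array([0.44, 0.26, 0.08])
    # Add a vector orthogonal to the projection: luma is unchanged.
    green = brown + (-0.5) * np.array([0.587, -0.299, 0.0])

    print(brown @ W, green @ W)  # same gray value (up to float rounding)
    print(green)                 # [0.1465 0.4095 0.08] -- a different color

(Real film stocks have their own spectral response, not Rec.601, but the many-to-one collapse is the same.)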


>And even STILL, you can't be SURE.

That's why I said "with reasonable accuracy and consistency". Humans can't be SURE either. Nothing is ever SURE if we stretch everything ad absurdum.

My entire point is that computers can be better than people at a given visual recognition task. Therefore we might discover that some information is present in the data even though we previously thought that information was not recorded.

That's literally the entire argument; I'm not sure what you're objecting to.


I tend to disagree. Even with superhuman pattern matching, what makes a frame unique is everything in it that does NOT follow the pattern: the way the grain is distributed, the nth-order reflections and shadows. That's what makes it what it is.



