I keep thinking there will be a significant reduction in complexity there. The intensity of a pixel in a hologram is essentially an integral over all surfaces visible from that point. So imagine a rather complex formula applied, per pixel, to each visible surface. Then imagine holographic bounding boxes that compress complex geometry into a few holograms of what's inside. This would cut the n^4 cost back down, but the resolution required for holograms is still very, very high. On the other hand, we could use fancy GPUs to evaluate the integrals.
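A minimal sketch of that per-pixel integral, assuming a simple point-source (Fresnel-style) hologram model: each hologram pixel coherently sums a spherical wave from every visible scene point. The wavelength, pixel pitch, and toy scene below are illustrative assumptions, not a real display spec.

```python
import numpy as np

wavelength = 633e-9            # red laser, meters (assumed)
k = 2 * np.pi / wavelength     # wavenumber

# Toy scene: a few point emitters floating in front of the hologram plane.
scene = np.array([[0.0, 0.0, 0.10],
                  [0.001, 0.0005, 0.12],
                  [-0.0005, 0.001, 0.15]])   # (x, y, z) in meters

# A small patch of hologram pixels; real holography needs micron-scale pitch
# over a much larger area, which is where the resolution problem lives.
n = 64
pitch = 1e-6
xs = (np.arange(n) - n / 2) * pitch
px, py = np.meshgrid(xs, xs)

# Each pixel integrates (here: sums) the complex field from every point.
field = np.zeros((n, n), dtype=complex)
for sx, sy, sz in scene:
    r = np.sqrt((px - sx) ** 2 + (py - sy) ** 2 + sz ** 2)
    field += np.exp(1j * k * r) / r    # spherical wave from each point

intensity = np.abs(field) ** 2         # what the pixel actually records
```

The cost is O(pixels × scene points), which is exactly why bounding-box compression of the geometry (and GPU evaluation) would matter.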
You can compress most Earth light fields really well. That might be what you're intuiting. There's a ton of redundancy due to the mostly opaque nature of reality.
But compressing a light field and projecting one into an eye are totally different things. Your display needs to be capable of displaying all 3n^4 possible intensities at different times. Depending on the scene and where you're looking, you can get away with showing only a small subset of those (a few megapixels' worth), but the display still needs to be capable of displaying them all.
If your display is just fixed LEDs behind a microlens array then you still need n^4 resolution. Like, a megapixel per pixel. Most of the pixels can be off at any given time, but you'll need them all to display arbitrary scenes.
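A back-of-envelope for that "megapixel per pixel" claim, assuming n = 1000 (so ~1 MP of spatial resolution and ~1 MP of angular rays behind each spatial pixel):

```python
# Illustrative numbers only: n = 1000 in both spatial and angular dimensions.
spatial = 1000 * 1000        # ~1 MP of spatial pixels (n^2)
angular = 1000 * 1000        # ~1 MP of angular rays per spatial pixel (n^2)
subpixels = 3                # RGB

total = spatial * angular * subpixels
print(f"{total:.2e} addressable intensities")  # 3e12 -- the 3n^4 above
```

Most of those 3 trillion emitters sit dark at any instant, but an arbitrary scene can demand any of them, so the hardware has to physically exist.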
It's actually worse than that, for a near-eye display: unless you know where the person is focusing, you don't know where the redundancy is, so you have to draw the entire 4D raster. (If you know their focus distance, you can probably just draw a 2D image on the retina and be done -- you get that 4D->2D projection.)
Otherwise, the existence of those additional rays is what allows for focus accommodation: as you focus in different planes, it 'shuffles' the light around, to create sharp edges (where similar light rays line up) or blurry foreground/background (similar light rays strike different portions of the retina -- and if any are missing, there is a 'hole' in the blur disk).
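That 4D->2D projection at a known focus distance can be sketched with classic shift-and-add refocusing: given a 4D light field L[u, v, s, t] (angular u, v; spatial s, t), shift each angular view by a disparity proportional to the focus depth and average. The array sizes, the toy light field, and the integer-pixel shift via np.roll are all simplifying assumptions.

```python
import numpy as np

def refocus(lf, shift_per_view):
    """Average angular views, each shifted by a disparity set by focus depth."""
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * shift_per_view))
            dv = int(round((v - V // 2) * shift_per_view))
            # np.roll stands in for a proper sub-pixel shift
            out += np.roll(np.roll(lf[u, v], du, axis=0), dv, axis=1)
    return out / (U * V)

# Toy 4D light field: 5x5 angular views of a 32x32 scene.
lf = np.random.rand(5, 5, 32, 32)
img = refocus(lf, shift_per_view=1.0)  # one 2D retinal image per focus depth
```

Rays that line up under the chosen shift reinforce into sharp edges; mismatched rays smear out into the blur disk described above, and any missing rays leave a hole in it.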
This is one of the reasons why I'm actually quite bullish on handheld VR/AR. You lose one hand and give up some immersion, but in exchange you get all of the other benefits of VR/AR without any of the optical challenges of a headset, and with lots more performance headroom.
I think Google is on the right path with Tango-first. I didn't use to think so.
I propose the display is a proper hologram, and hence just a really high resolution coherent display. Not that I'm aware such a thing exists yet, but it would be really high resolution 2D.
Just hand-waving thinking here...