
So VR is hard enough - to avoid the jitter that makes users feel sick, you have to respond to a user's head movement, render a new frame with the new information the user should see, respond to any button presses, then draw the frame, all in under 14-20 ms.
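
Back-of-the-envelope, assuming the whole motion-to-photon pipeline has to fit inside one refresh interval of a typical HMD, that budget looks roughly like this:

    # Rough motion-to-photon budget per frame, assuming tracking, simulation,
    # rendering, and scan-out must all finish within a single refresh interval.
    # Illustrative only.
    for hz in (60, 90, 120):
        budget_ms = 1000.0 / hz
        print(f"{hz} Hz refresh -> {budget_ms:.1f} ms per frame")
    # 60 Hz  -> 16.7 ms
    # 90 Hz  -> 11.1 ms
    # 120 Hz ->  8.3 ms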

Magic Leap was always a much more difficult problem... they have to respond to a user's head movement, parse the scene the user is looking at (which could be anything), figure out what to draw and where to draw it on the user's environment, then render, all in the same 14-20ms window.

Compounding that, they have to do it with a much weaker CPU/GPU/battery than Oculus and friends, which use a phone or are tethered to a PC with a $1,000 GPU. You wear Magic Leap on your head, no cables.

On its face I was always sort of surprised to hear about the good demos; it always seemed like such a difficult problem, and I know the VR folks are having a tough enough time with it.



That's not the big unsolved problem with augmented reality. Those are all problems VR systems can already solve with enough money and transistors behind them. The AR big problem is displaying dark. You can put bright things on the display, but not dark ones. Microsoft demos their AR systems only in environments with carefully controlled dim lighting. The same is true of Meta. Laster just puts a dimming filter in front of the real world to dim it out so the overlays show up well.

Is there any AR headgear which displays the real world optically and can selectively darken the real world? Magic Leap pretended to do that, but now it appears they can't. You could, of course, do it by focusing the real scene on a focal plane, like a camera, using a monochrome LCD panel as a shutter, and refocusing the scene to infinity. But the optics for that require some length, which means bulky headgear like night vision glasses. Maybe with nonlinear optics or something similarly exotic it might be possible. But if there was an easy way to do this, DoD would be using it for night vision gear.


The AR big problem is displaying dark. You can put bright things on the display, but not dark ones.

There is actually a solution (EDIT: nope, see replies), but it's tricky: If you stack two sheets of polarizing filter on top of each other, then rotate one relative to the other, they pass roughly half of the incoming light at 0 degrees of rotation and block essentially all of it at 90 degrees. It's like special paper that adjusts brightness depending on how much it's rotated relative to the sheet of paper behind it. https://www.amazon.com/Educational-Innovations-Polarizing-Fi...
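
For the curious, the underlying physics is Malus's law. A quick sketch (the factor of one half is the unavoidable loss from the first sheet polarizing unpolarized incoming light):

    import math

    def stacked_polarizer_transmission(angle_deg: float) -> float:
        """Fraction of unpolarized light passed by two ideal stacked polarizers
        whose axes differ by angle_deg (Malus's law). The first sheet alone
        discards half of the unpolarized light."""
        theta = math.radians(angle_deg)
        return 0.5 * math.cos(theta) ** 2

    for a in (0, 30, 45, 60, 90):
        print(f"{a:3d} deg -> {stacked_polarizer_transmission(a):.3f}")
    # 0 deg passes ~50% of the incoming light; 90 deg passes essentially none.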

So you could imagine cutting a series of circular polarizing filters and using them as "pixels". If you had a grid of 800x600 of these tiny filters, and a way to control them at 60 fps, you'd have a very convincing way of "displaying dark" in real time.

It'd require some difficult R&D to be viable. Controlling 800x600 = 480,000 tiny surfaces at 60fps would take some clever mechanics, to put it mildly. Maybe it won't ever be viable, but at least there's theoretically a way to do this.

A minor problem with this approach is that the polarizing filter may affect the colors behind it. But humans are very good at adapting to a constant color overlay, so it might not be an issue.


The problem with that solution is optical, I believe. It would work if you were able to put such a filter directly on your retina, but when you put it earlier in the path of the light, before images are focused, you cannot selectively block individual pixels as they appear on your retina. As a result, the dark spots will look blurry.

(Also, if the pixels are dense enough I imagine you'll get diffraction.)

Here is Michael Abrash's better explanation:

>“But wait,” you say (as I did when I realized the problem), “you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. You can’t focus on an LCD screen an inch away (and you wouldn’t want to, anyway, since everything interesting in the real world is more than an inch away), so a pixel at that distance would show up as a translucent blob several degrees across, just as a speck of dirt on your glasses shows up as a blurry circle, not a sharp point. It’s true that you can black out an area of the real world by occluding many pixels, but that black area will have a wide, fuzzy border trailing off around its edges. That could well be useful for improving contrast in specific regions of the screen (behind HUD elements, for example), but it’s of no use when trying to stencil a virtual object into the real world so it appears to fit seamlessly.

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...
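
To put rough numbers on the "several degrees" bit, assume a ~4 mm pupil and an occlusion mask about an inch (25 mm) in front of an eye focused on the distant world. This is just geometric optics, ignoring diffraction:

    import math

    def occluder_blur_deg(pupil_mm: float, mask_distance_mm: float) -> float:
        """Angular width over which a point-sized occluder smears out when the
        eye is focused far away: roughly the angle the pupil subtends as seen
        from the mask. Simple geometric estimate, diffraction ignored."""
        return math.degrees(2 * math.atan((pupil_mm / 2) / mask_distance_mm))

    # ~4 mm pupil, mask ~25 mm (about an inch) from the eye:
    print(f"{occluder_blur_deg(4.0, 25.0):.1f} degrees")  # ~9 degrees of blur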


What about Near-Eye Light Field Displays[1][2]? From what I've seen, those look promising for solving some focus problems and some of the problems with how cumbersome most VR/AR displays are. As a bonus, they can correct for prescriptions.

1: https://www.youtube.com/watch?v=uwCwtBxZM7g

2: https://www.youtube.com/watch?v=8hLzESOf8SE


That makes sense. Thank you for the explanation.


The answer is a higher-resolution screen plus some clever adaptive optics and software. The problem is that even 8K screens do not come close to the required resolution... And you also want a fast refresh rate.
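
Rough numbers, assuming ~1 arcminute acuity (about 60 pixels per degree) over a generous field of view:

    # Back-of-the-envelope pixel count for "retinal" resolution across a wide FOV.
    # Assumes ~60 px/degree and a ~200 x 130 degree view; both are assumptions.
    px_per_deg = 60
    h_fov_deg, v_fov_deg = 200, 130

    h_px = h_fov_deg * px_per_deg          # 12,000
    v_px = v_fov_deg * px_per_deg          #  7,800
    total_mp = h_px * v_px / 1e6           # ~94 megapixels per eye

    uhd8k_mp = 7680 * 4320 / 1e6           # ~33 megapixels
    print(f"needed: ~{total_mp:.0f} MP per eye vs 8K panel: ~{uhd8k_mp:.0f} MP")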


"We made black with light. That's not possible, but it was two artists on the team that thought about the problem, and all of our physicists and engineers are like "well you can't make black, it's not possible." but they forgot that what we're doing - you know the whole world that you experience is actually happening in [the brain], and [the brain] can make anything." - Rony Abovitz in 2015.

https://www.youtube.com/watch?v=bmHSIEx69TQ&feature=youtu.be...


That was a nauseatingly obtuse way to say "We're trying to create a standing wave on the retina."

That approach is devastatingly hard, but probably the best way to do it.


No, they're just putting out enough light to override the background, then showing dimmer areas for black. If they have a display with enough intensity and dynamic range, they can pull this off. Eye contrast is local, not absolute, so this works within the dynamic range of the human eye.
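
A toy additive model of that (illustrative numbers, not any particular headset's optics): what reaches the eye is the leaked real world plus whatever the display emits, so the floor for "black" is set by the leakage and local contrast does the rest perceptually.

    def perceived_luminance(world_nits: float, combiner_transmission: float,
                            display_nits: float) -> float:
        """Additive see-through model: attenuated real world plus display light."""
        return world_nits * combiner_transmission + display_nits

    bg = perceived_luminance(200, 0.3, 0)     # "black" pixel: 60 nits of leaked world
    ui = perceived_luminance(200, 0.3, 500)   # bright overlay pixel: 560 nits
    print(bg, ui, f"contrast ~{ui / bg:.1f}:1")
    # "Black" is really just dimmer-than-the-overlay; it never goes below the leakage.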


No, they're just putting out enough light to override the background, then showing dimmer areas for black

Right, that's how all existing HMD systems work - but generally the whole FOV isn't re-rendered, so it's not so cut and dried.

Note that such a system doesn't give you black ever. It gives you muddy [insert lens color on the grey spectrum].

The case you describe ends up with what is practically an opaque lens replicating the non-virtual parts of the environment. So you might as well just use VR with camera pass-through at that point.


How would that be different from an LCD, which is something they've presumably looked at and not used?


I don't know. It's similar. I wonder what the problems are with using it, then?

One idea that comes to mind is that a regular screen leaks light. If you adjust the brightness on your laptop as low as it will go, then display a black image, there's still a massive difference between what you see there vs when the screen is completely powered off. But if you take two sheets of polarizing filter and stick them in front of your screen, you can get almost perfect blackness. That's why I thought it was a different idea, since the difference is so dramatic. You can block almost all the light, whereas a regular screen doesn't seem to be able to get that kind of contrast.


I don't think that level of black is a good thing for AR. If you can't distinguish the augmentation from reality, I'd argue that's a bad thing.


Let me be clearer: Being able to show black is super important for AR. For one thing, it's a pain in the ass to read text on a transparent HMD, because you never know what colors will be behind the letters. You can make some educated guesses, and put up your own background colors for the text, but since everything is partially transparent, it'll always matter what's physically behind the "hologram".

Yes, yes, it's still not a hologram, but the popularized version of holograms (from things like Star Wars) is still the best way to think about AR displays.

If you can show SOME black, text legibility becomes a lot easier. Everything can look way better, even if the world always shines through enough to see it.

If you can show PURE black, enough to ACTUALLY obscure the world, now you can erase stuff. Like indicator lights, panopticon optics, and people.


Right. Pictures of what people see through the goggles seem to be either carefully posed against dark backgrounds (Meta [1]) or totally fake (Magic Leap [2], Microsoft [3]). It's amazingly difficult to find honest through-the-lens pictures of what the user sees. When you do, they're disappointing.

[1] http://media.bestofmicro.com/V/H/563597/original/Meta-Collab... [2] http://www.roadtovr.com/wp-content/uploads/2015/10/magic-lea... [3] https://winblogs.azureedge.net/devices/2016/02/MSHoloLens_Mi...


And that would necessarily be bad?


I was thinking the exact same thing.

Anyone seen a TFT "filter" mounted on the front of a transparent OLED "backlight"?

Yes I'm sure they must have thought of it. Wonder what the issue is.

[edit] I read the explanation further down now


You mean a TN LCD screen? They work exactly like this, using liquid crystal to change polarisation angle. Pretty much every LCD ever.

It can give pure black if the polarising plates and liquid crystal are accurate enough.


The part that's surprising to me is how instantly popular a startup it became with so little information. Was this "demo" they gave so damn good that all the investors (some really reputable ones, such as Google) started throwing money at it without doing their research to see how real it was?

Sounds very suspicious to me.


Especially considering the nightmare of a funding climate we have. Those demos must have been truly life-altering to warrant the $500m in investment.

Profitable startups with hockey-stick growth can't even raise a couple million... damn.


Why is this post greyed out? Did people downvote it? Is it not a reasonable question to ask why capital is wasted?


Perhaps current growth just isn't a great predictor of future growth. For example, tons of food-delivery and pet-product startups have exponential growth early on and evaporate a bit later, when the product ceases to be sold at cost and a newer competitor appears.


>Is there any AR headgear which displays the real world optically and can selectively darken the real world?

They've said they have a solution, and it's more optical illusion than optics. They don't darken the real world, but will make you perceive something similar.


Why can't you "just" use a camera and project the light with "standard" VR technology?


That's one approach - just add a camera, mix the camera image with the graphics, and feed it to a VR headset. Works, but it's more intrusive for the user than the AR enthusiasts want.


The main issue with this approach is that the video pipeline adds dozens of milliseconds of latency, and it becomes awkward to interact with the physical environment. You couldn't play AR pong, for example.
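
The latency stacks up stage by stage. Purely illustrative numbers, not measurements of any particular headset:

    # Rough, assumed per-stage latencies for camera pass-through, to show how
    # "dozens of milliseconds" accumulates before the graphics even start.
    stages_ms = {
        "camera exposure":    8,
        "sensor readout":     8,
        "ISP / undistort":    5,
        "compositing/render": 5,
        "display scan-out":   8,
    }
    print(sum(stages_ms.values()), "ms of photon-to-photon latency")  # ~34 ms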


> The AR big problem is displaying dark

What about those windows that, at the press of a button, turn relatively opaque?


They're called LCD screens. The problem is having a high-resolution one. (Focus problems can be solved with adaptive optics and by measuring the refractive index of the eye's lens via IR laser.)


Michael Abrash wrote about this a while back and it made me suspect that Magic Leap wasn't where they were pretending to be.

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...

Two of the major unsolved problems he talks about are latency and the ability to draw black - I would be surprised if Magic Leap had solved both of these alone and in secret.

Their vapid press releases didn't inspire confidence either.


So VR is hard enough - to avoid the jitter that makes users feel sick, you have to respond to a user's head movement, render a new frame with the new information the user should see, respond to any button presses, then draw the frame, all in under 14-20 ms.

A bit of a tangent, but: for some people, it's impossible for VR to avoid making them feel sick. It's fundamental to them wearing a VR headset rather than a technical challenge to be overcome. It's related to the fact that VR headsets can't project photons from arbitrary positions with arbitrary angles towards your eyes (i.e. a screen is planar, but the real world is a 3D volume). Turns out, evolution has ensured our bodies are very good at determining the difference between a screen's projection and the real world, resulting in a sick feeling when there's a mismatch.

I think that when people accept it's inevitable some subset of users will get sick, the VR ecosystem will grow at a faster rate.


Technically this would be possible to overcome with a light field display. Light field displays are currently very far off from becoming commercially viable.


Well, whether the product is viable or not, that's what Magic Leap is trying to deliver.


You could approximate this using a standard display if you could dynamically track and respond to focal depth.

That might be harder than getting a functional light field display, though.


Much easier; we have the tech in ophthalmology, and adaptive lenses are already used in cellphone cameras.


Out of curiosity, what would the tech stack look like?

What I was picturing was:

- LCD display
- Patterned IR image projected onto the eye
- Camera to read the IR image on the back of the retina, to figure out the eye's current focal state
- Realtime chip to adjust the LCD display based on focal depth, to simulate a light field image

... The real issue is that the response latency would probably have to be <10 ms to avoid being disorienting.
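
For what it's worth, the loop I'm imagining looks something like this. Every name here is hypothetical, it's just the shape of the pipeline, not a real API:

    import time

    FRAME_BUDGET_S = 0.010  # the ~10 ms target mentioned above

    def varifocal_loop(eye_tracker, renderer, display):
        """Hypothetical control loop: estimate where the eye is focused, then
        re-render/adjust the image for that focal depth. All three objects
        are stand-ins for imagined hardware."""
        while True:
            start = time.monotonic()
            focal_depth_m = eye_tracker.estimate_focal_depth()  # from the IR pattern
            frame = renderer.render_for_focus(focal_depth_m)    # adjust blur/optics
            display.present(frame)
            if time.monotonic() - start > FRAME_BUDGET_S:
                # Missing the budget is exactly the disorientation risk above.
                print("warning: missed focal update budget")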


Isn't it more likely to be lag and inner ear related, not related to light angles? Why do you think it's light angles?


It's hotly debated, and the literature is often in conflict as to the exact cause of simulator sickness. Regardless of the exact reason, sickness in roughly half the population appears to be somehow fundamental to VR.

Here's some interesting reading:

https://en.wikipedia.org/wiki/Simulator_sickness

https://news.ycombinator.com/item?id=5265985

> In a study conducted by U.S. Army Research Institute for the Behavioral and Social Sciences in a report published May 1995 titled "Technical Report 1027 - Simulator Sickness in Virtual Environments", out of 742 pilot exposures from 11 military flight simulators, "approximately half of the pilots (334) reported post-effects of some kind: 250 (34%) reported that symptoms dissipated in less than 1 hour, 44 (6%) reported that symptoms lasted longer than 4 hours, and 28 (4%) reported that symptoms lasted longer than 6 hours. There were also 4 (1%) reported cases of spontaneously occurring flashbacks."

Simulator Sickness in Virtual Environments (PDF): http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA295861

Some interesting quotes from the report:

Expected incidence and severity of simulator sickness in virtual environments

> In an analysis of data from 10 U.S. Navy and Marine Corps flight simulators, Kennedy, Lilienthal, Berbaum, Baltzley, and McCauley (1989) found that approximately 20% to 40% of military pilots indicated at least one symptom following simulator exposure. McCauley and Sharkey (1992) pointed out that pilots tend to be less susceptible to motion sickness than the general population due to a self-selection process based on their resistance to motion sickness. Since VE technologies will be aimed at a more general population, such selection against sickness may not occur. Thus, McCauley and Sharkey suggested that sickness may be more common in virtual environments than in simulators.

Findings

> Although there is debate as to the exact cause or causes of simulator sickness, a primary suspected cause is inconsistent information about body orientation and motion received by the different senses, known as the cue conflict theory. For example, the visual system may perceive that the body is moving rapidly, while the vestibular system perceives that the body is stationary. Inconsistent, non-natural information within a single sense has also been prominent among suggested causes.

> Although a large contingent of researchers believe the cue conflict theory explains simulator sickness, an alternative theory was reviewed as well. Forty factors shown or believed to influence the occurrence or severity of simulator sickness were identified. Future research is proposed.

Vection

> One phenomenon closely involved with simulator sickness is that of illusory self-motion, known as vection. Kennedy et al. (1988) stated that visual representations of motion have been shown to affect the vestibular system. Thus, they conclude that the motion patterns represented in the visual displays of simulators may exert strong influences on the vestibular system. Kennedy, Berbaum, et al. (1993) stated that the impression of vection produced in a simulator determines both the realism of the simulator experience and how much the simulator promotes sickness. They suggested that the most basic level of realism is determined by the strength of vection induced by a stimulus. For a stimulus which produces a strong sense of vection, correspondence between the simulated and real-world stimuli determines whether or not the stimulus leads to sickness. Displays which produce strong vestibular effects are likely to produce the most simulator sickness (Kennedy, et al., 1988). Thus, Hettinger et al. (1990) hypothesized that vection must be experienced before sickness can occur in fixed-base simulators.

> While viewing each of three 15-minute motion displays, subjects rated the strength of experienced feelings of vection using a potentiometer. In addition, before the first display and after each of the three displays, the subjects completed a questionnaire which addressed symptoms of simulator sickness. Of the 15 subjects, 10 were classified as sick, based on their questionnaire score. As for vection, subjects tended to report either a great deal of vection or none at all. In relating vection to sickness, it was found that of the 5 subjects who reported no vection, only 1 became sick; of the remaining 10 subjects who had experienced vection, 8 became sick. Based on their results, Hettinger et al. concluded that visual displays that produce vection are more likely to produce simulator sickness. It is also likely that individuals who are prone to experience vection may be prone to experience sickness. It was mentioned earlier in this report that a wider field of view produces more vection and, thus, is believed to increase the incidence and severity of sickness (Kennedy et al., 1989). Anderson and Braunstein (1985), however, induced vection using only a small portion of the central visual field and 30% of their subjects experienced motion sickness.


Take these studies with a grain of salt, though, as display technology, IMUs, external tracking, and, more importantly, computing power are orders of magnitude better than what was available in 1995.


Certainly good points. It also might be a mistake to discount studies performed under controlled conditions, though. If the cause of simulator sickness is illusory motion, then the relatively crude technologies available in 1995 may have been sufficient to induce the same effect we're observing today.


I've never met anyone who gets sick based entirely on vergence-accommodation conflict. I've certainly talked to people who claimed it was an issue but would happily go to a 3D film, so I assume they were misattributing the cause of their sickness. Maybe having the peripherals line up for 3D films is enough.

I'd be interested to hear some testimonials on how it affects people though. One of the interesting things about VR is finding out the myriad of different ways people are sensitive to small phenomena.


I'll chime in on this:

I own a Vive, and get sick using that after half an hour maybe (even though it's not too bad and I can keep using it), and it takes me a few hours to "reset" afterwards.

I don't get sick in 3D movies (but I don't enjoy them, and never choose to see one in 3D if there's an alternative).

The reason I don't get sick in 3D movies I assume is because it's still just a screen, and any head movement I make is still 100% "responsive", with no delay or mismatch.

Edit: Sorry, I read your response a bit lazily. Yes, it's not related to vergence-accommodation, at least not exclusively.


I worked on Hololens stuff at Microsoft. It does everything you describe, and IMO does it really well. It's fairly light and is wireless. The tracking is excellent. A lot of the scene parsing is done in special hardware, with low latency. It updates a persistent model of the environment, so it isn't rebuilding everything every frame.


Any particular reason you stopped working on it?


It was a contract position.


He probably moved to another secretive project in the hardware division. That's what my friend did.


You don't need to parse the environment from scratch 60 times every second. As long as you get head tracking right, you can just focus on what's moving and assume it will keep moving the same way. Further, the demos all seem to be in fairly static environments. Remember, you don't need to render the background, so a simplified internal model works fine for AR.

If it works near a clothesline blowing in the wind, that's really hard and impressive.


The problem is even worse. In VR you're competing on latency with human muscle feedback and what your vestibular system is telling you.

In AR, you're competing on latency with the human visual system that you're trying to augment, which is a race you can't win.

The only thing you can do is try to be very fast (<= 10 ms) so the latency becomes unnoticeable. Unfortunately, right now this isn't possible unless you custom-engineer everything in your vision path for latency. Fun R&D project, but enormously time- and capital-intensive with no guarantee of success.


AR, though, has much less of a problem with simulator sickness, because your normal vestibular sense and motion perception still apply to everything else you see.

VR is a much tougher nut to crack.


A trick I've heard at least one of the VR devices uses: rather than re-rendering the entire scene, since most of the movement is just head tracking, it renders into a larger buffer and scrolls around in that buffer between frames to get a quicker response.


That only works for very, very minimal differences. It's the Oculus that does this. It guesses where your head will be, renders the scene at that location, and distorts the image slightly to match its guess vs reality. It also introduces a bit of softness, but seems to work pretty well. But it does re-render the whole image each frame.
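
A minimal sketch of the math behind that kind of late rotational correction (not Oculus's actual code): for a pure rotation, the rendered image can be re-mapped with a homography built from the camera intrinsics K and the delta rotation.

    import numpy as np

    def rotation_only_reprojection(K: np.ndarray,
                                   R_predicted: np.ndarray,
                                   R_actual: np.ndarray) -> np.ndarray:
        """Homography relating pixels rendered for head orientation R_predicted
        to pixels for orientation R_actual. Valid only for pure rotation (no
        translation, hence no parallax): H = K * R_delta * K^-1."""
        R_delta = R_actual @ R_predicted.T
        return K @ R_delta @ np.linalg.inv(K)

    # Toy example: 1 degree of yaw error between the guess and the real pose.
    K = np.array([[800, 0, 640],
                  [0, 800, 360],
                  [0,   0,   1]], dtype=float)
    yaw = np.radians(1.0)
    R_pred = np.eye(3)
    R_act = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                      [0, 1, 0],
                      [-np.sin(yaw), 0, np.cos(yaw)]])
    H = rotation_only_reprojection(K, R_pred, R_act)
    p = H @ np.array([640, 360, 1.0])
    print(p[:2] / p[2])  # the center pixel shifts by roughly 14 px for this K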


I wonder if it does it to create intra-frame changes from rotation and/or motion of the head. If it does, then that could help a lot with the experience.


>It's the Oculus that does this.

And the PSVR, no? I think that all the VR systems will (optionally) do this to reduce the processing power needed.


I have no idea about the PSVR. It's probably the VR device I've heard the least about (technically). I own one, and it works pretty well, but I don't know much about it.

From my understanding, this won't do anything to reduce processing power; it's all about reducing latency. That final transform happens at the last moment, with the most up-to-date head tracking information. I kind of figured it happened in the headset hardware itself in order to reduce the latency as much as possible.


The technique is called asynchronous timewarp

https://developer3.oculus.com/blog/asynchronous-timewarp-exa...


That would only work if the scene was a single flat surface and you were only moving parallel to it. If you were doing that, you may as well not use VR and just look at a normal monitor. Otherwise it wouldn't match your motion, which is the problem you're trying to solve in the first place!


Actually, moving parallel to it would make the artifacts worse, since the distortion depends on distance (i.e. parallax). With a rotation, if you're rendering onto a sphere, the warp introduces comparatively little distortion. The scene still needs to be re-rendered to be fully correct, but this allows better intra-frame updates (say the game runs at 60 Hz and you need 120 Hz) with less computation. As a sibling commenter pointed out, it makes the image slightly "fuzzy" or softer because you end up shifting the image by a fractional number of pixels. As I understand it, though, that extra intra-frame update can mean the difference between some people getting a VR migraine or not.
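
Rough numbers on why translation is the hard case: the pixel shift from a small head translation depends on depth (parallax), while a pure rotation shifts everything by roughly the same amount regardless of depth. Illustrative values only:

    import numpy as np

    f_px = 800.0          # assumed focal length in pixels
    t_m = 0.01            # 1 cm of head translation between frames
    yaw_rad = np.radians(0.5)

    for depth_m in (0.5, 2.0, 10.0):
        parallax_px = f_px * t_m / depth_m      # translation: depth-dependent
        rotation_px = f_px * np.tan(yaw_rad)    # rotation: same for all depths
        print(f"depth {depth_m:4.1f} m: translation shift {parallax_px:5.1f} px, "
              f"rotation shift {rotation_px:5.1f} px")
    # 0.5 m: 16 px vs 7 px; 10 m: 0.8 px vs 7 px. Translation errors can't be
    # fixed with a single full-image warp; rotation mostly can.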


The latency / motion sickness issues probably aren't as bad, since you still see the surrounding environment to get a stable bearing. The display tech sounds very hard, though.


Hello? HoloLens did it already, and tether-free. Here is my mixed reality capture taken on HoloLens. https://m.youtube.com/watch?v=Av3Fdx5RnUI


I could never really figure out how the variable focus lightfield worked.

I sort of assumed it was like a stack of images along the Z axis. So not only are they doing everything you mentioned, they are also rendering the 3D scene in slices?


Isn't the head-tracking separate from the graphics rendering? I mean, you could just render onto an area that's somewhat larger than your FOV, and select some area every couple of ms based on the head movement.


You want to integrate rendered objects into the physical world, so you have to know exactly what pose these objects should be rendered at as seen from your point of view. The objects' transforms are driven by the head tracking, usually with pose prediction thrown in to squash latency.
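
A minimal sketch of that kind of pose prediction, assuming constant linear and angular velocity over the prediction horizon (real trackers use fancier filters):

    import numpy as np

    def predict_pose(position, velocity, orientation_q, angular_velocity, latency_s):
        """Constant-velocity pose extrapolation over the expected motion-to-photon
        latency. orientation_q is a unit quaternion (w, x, y, z); angular_velocity
        is rad/s in body axes. A toy predictor, not a real tracking filter."""
        pred_pos = position + velocity * latency_s
        wx, wy, wz = angular_velocity
        # Standard quaternion kinematics, small-angle (single-step) integration.
        omega = np.array([
            [0, -wx, -wy, -wz],
            [wx, 0, wz, -wy],
            [wy, -wz, 0, wx],
            [wz, wy, -wx, 0],
        ])
        pred_q = orientation_q + 0.5 * latency_s * omega @ orientation_q
        return pred_pos, pred_q / np.linalg.norm(pred_q)

    # Predict 15 ms ahead while drifting along x and yawing at 2 rad/s.
    pos, q = predict_pose(np.zeros(3), np.array([0.1, 0, 0]),
                          np.array([1.0, 0, 0, 0]), np.array([0, 2.0, 0]), 0.015)
    print(pos, q)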



