Can we realistically create a 3-D visual field?
January 14, 2022 9:23 AM
In the real world, human eyes shift focus when they move between looking at something close and something far away. Is there any image-generating technology that can mimic this real-world behavior, requiring your eyes to focus far for distant virtual objects and near for close virtual objects?
My car's rear-view mirror has two settings: setting 1 uses an actual old-fashioned mirror. Setting 2 displays a video image relayed from a camera on the back of the car.
The first time I flipped from setting 1 to setting 2 I found it jarring, even painful. I was still looking at the same scene, but my eyes had to shift to "near focus" to see the video screen correctly even though the objects in it were far away. In contrast, when I used the actual mirror, my eyes focused at the distance of the physical objects I was observing. This problem hadn't occurred to me. I didn't like the feel of using the video screen because of this focus issue, and so I don't use it.
How do VR goggles handle this problem? Do the users' eyes have to shift focus when moving from objects that are virtually near to objects that are virtually far? I would guess that they don't; that instead, eyes would always use near focus. But doesn't that make the experience feel weird and unnatural? Do they expect people to just get used to it? And do people get used to it?
Is there any technology that can recreate light patterns that require distance focus? Do holograms do this?
In general, I'm wondering how people who work on this stuff talk about it and think about it. I expect there is actual terminology and it's a known field of study, but I've never seen it discussed. I'd think there'd be a lot of interesting design and usability issues that would come up as a result.
VR headsets (I will not call them goggles) currently use a fixed focus; e.g., the Quest and Quest 2 have a focal distance of about 4 feet. I think most VR headsets use something closer to 6 feet. Some people have convergence issues, where their brain tries to focus the eyes on something perceived as being closer than 4 feet away, and things go blurry. I believe that's part of the reason they used 4 feet for the Quest lineup: to limit the near-convergence issue, since fewer people have problems at longer distances.
Everything being "in focus" might cause some uncanny-valley issues for VR, but most current devices are so low in pixels per degree, compared to what the human eye can perceive, that it all looks pretty fake already, even the best stuff like Half-Life: Alyx or 10K 180-degree video. The Varjo Aero has about 35 pixels per degree, while most humans can perceive around 60 pixels per degree. The Quest 2 is about 20 pixels per degree.
There is the concept of varifocal optics, but currently I don't believe there are any consumer headsets on the market with it, due to cost.
posted by nobeagle at 9:30 AM on January 14, 2022 [3 favorites]
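As a quick back-of-the-envelope check on those pixels-per-degree figures, here's a minimal sketch; the per-eye resolution and field-of-view numbers are approximate assumptions, not official specs:

```python
def pixels_per_degree(horizontal_pixels_per_eye: int, horizontal_fov_deg: float) -> float:
    """Average angular resolution across the horizontal field of view."""
    return horizontal_pixels_per_eye / horizontal_fov_deg

# Assumed, approximate Quest 2 numbers: ~1832 px wide per eye, ~90 deg horizontal FOV.
print(f"Quest 2: ~{pixels_per_degree(1832, 90.0):.0f} px/deg")  # roughly 20

# A 20/20 eye resolves about 1 arcminute, i.e. roughly 60 px/deg, so even the
# sharpest current headsets sit well below "retinal" resolution. Peak figures
# like the Varjo Aero's quoted ~35 px/deg typically refer to the center of the
# view, so they can beat this simple average.
```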
In general you want to look up “light fields” and the 4D plenoptic function (radiance parameterized as position + direction). Varifocal lenses are one way to get there. There's also been research into various static optical systems to achieve the same effect. There's a great NVIDIA demo from SIGGRAPH 2013 using microlenses on a display to create a light field in an HMD. The main challenge with this is physics: making pixels small enough and tiny optics clear enough to generate enough rays to get real accommodation (you need enough rays entering the eye, and there are trade-offs between spatial and angular resolution). Something like Looking Glass is on the path, but the rays aren't dense enough to drive accommodation.
posted by Alterscape at 10:02 AM on January 14, 2022 [2 favorites]
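To make the spatial-versus-angular trade-off concrete, here's a minimal sketch of the pixel budget for a microlens-array light field display; the panel size and per-lenslet view counts are assumptions for illustration, not numbers from the NVIDIA prototype:

```python
def microlens_light_field_budget(panel_w: int, panel_h: int,
                                 views_x: int, views_y: int):
    """Split a flat panel's pixels between spatial and angular resolution.

    Each microlens covers views_x * views_y panel pixels; those pixels become
    the angular samples (distinct ray directions) for one spatial sample.
    """
    spatial_w = panel_w // views_x
    spatial_h = panel_h // views_y
    rays_per_point = views_x * views_y
    return spatial_w, spatial_h, rays_per_point

# Assumed example: a 4000 x 4000 panel behind a microlens array giving
# 8 x 8 views per lenslet.
w, h, rays = microlens_light_field_budget(4000, 4000, 8, 8)
print(f"spatial resolution: {w} x {h}, ray directions per point: {rays}")
# -> 500 x 500 spatial samples with 64 directions each: a huge pixel count
#    buys relatively few, coarse rays per point, which is why driving real
#    accommodation this way is so hard.
```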
> But doesn't that make the experience feel weird and unnatural? Do they expect people to just get used to it? And do people get used to it?
Just as far as what people get used to, there's a fascinating presentation from Michael Abrash back in 2014 that explores what set of things have to go right for people to feel "presence," a feeling that they're actually in the place shown in VR. He lists specific things like latency, movement tracking, resolution, etc., but the underlying theme is that this is still a thing we're learning about human brains as we go along. And we can only really learn by building bad things and seeing how they work. When none of the things are right, all we know is that the illusion isn't convincing. But when you make one thing much better (like improving tracking and latency, so your head moves better in the virtual space), then other things that were bothering you might go away (like resolution or focus problems). Or maybe they get worse, because now they stand out as the source of unreality. Or maybe they get better for some people and worse for others because brains are like that. Each engineering problem that we manage to solve -- and make cheap, so lots of people can experience it -- teaches us new things about how our brains work, and thus new things about what's important or not to our sense of reality.
posted by john hadron collider at 10:50 AM on January 14, 2022 [4 favorites]
After sleeping on it, a couple more thoughts:
1) Current non-varifocal HMDs are focused relatively far from the viewer. It's less like looking at a monitor tens of inches from your face, and more like looking at a scene off in the far field. The lack of focus cues is actually weirder for things in the near field (for me, anyway).
2) I made an error, before. In general the plenoptic function is 5D (xyz position plus two angles), but if you're thinking about it in the context of a display, you end up thinking a lot about 2D position on the screen surface.
One important thing to add here is that screen + parallax barrier and screen + lens approaches can't easily replicate (without more hardware) something that a hologram does: the phase of the light. I don't have the physics to explain this, but it's one thing that separates a "real" hologram from all of the other options, and it makes the rendering even more challenging (you don't just need position + direction, you also need phase).
posted by Alterscape at 11:03 AM on January 15, 2022 [1 favorite]
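One way to put numbers on the fixed-focus problem described above is in diopters (1 divided by distance in meters): vergence follows the virtual object while accommodation stays pinned at the headset's focal plane. A minimal sketch, assuming a fixed focal plane at about 1.2 m (the 4-foot figure mentioned earlier):

```python
def accommodation_mismatch_diopters(virtual_distance_m: float,
                                    focal_plane_m: float = 1.2) -> float:
    """Vergence-accommodation conflict for a fixed-focus HMD, in diopters.

    Vergence demand follows the virtual object (1 / distance in meters),
    while accommodation is held at the headset's fixed focal plane.
    """
    return abs(1.0 / virtual_distance_m - 1.0 / focal_plane_m)

for d in (0.3, 0.5, 1.2, 3.0, 10.0):
    print(f"virtual object at {d:>4} m -> mismatch ~{accommodation_mismatch_diopters(d):.2f} D")

# Objects at or beyond the focal plane stay under about a diopter of mismatch,
# but a virtual object at 0.3 m is off by about 2.5 D, which is consistent
# with near-field content feeling the weirdest on fixed-focus headsets.
```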
For recreating light patterns that require distance focus: light field displays might do this, but I'm not sure. They can present different views to each eye, but I'm not sure whether the focal plane can change.
posted by wemayfreeze at 9:30 AM on January 14, 2022 [3 favorites]

This thread is closed to new comments.