By: Lina Jamis, 2nd year student in the Anatomy Graduate Program
The promise of virtual reality has always been an enticing one: slip on a headset and escape to a new place without ever setting foot outside the room.
It’s an experience so unusual, and yet so familiar, as it hijacks our own senses to provide the qualities we might find in reality, but within the confines of the mind. Not only can virtual reality (VR) serve as a powerful medium for gaming and storytelling, but it may ultimately give us further insight into sensorimotor neuroscience and how to use this knowledge to create visually convincing worlds.
Intended for gamers, Oculus Rift emerged in 2012 from a Kickstarter campaign out of California and quickly became the most talked-about VR tool of the century. For the first time, you could walk through a Tolkien-esque landscape or stand on Everest, becoming fully immersed in the rich sensory experience of these digital landscapes. The technology was so promising that Facebook quickly caught on to the trend and purchased Oculus for $2 billion.
One of the more compelling qualities of VR is the generation of visually accurate landscapes in one continuous field, a technique pioneered by the film industry. The style mimics human attention by letting the viewer's gaze shift seamlessly to the secondary zones to the left and right of the main field of view; it's how we function in reality, narrowing and widening our visual field according to central and peripheral sensory stimuli.
In VR, the effort to achieve this wide field of view has sharpened what we know about visual processing, particularly peripheral vision. Peripheral vision is actually weak in humans, especially at distinguishing color and shape, which is why peripheral objects appear as a blur of motion in our visual field. This may be because peripheral vision travels through its own pathway in the brain, separate from the pathway used by central vision. In fact, these pathways work so differently that generating a visually convincing world in VR requires not only a wide field of view but also the successful integration of both visual fields.
There remain several glitches in the VR system that keep it from perfectly replicating reality, which may explain why the brain's response to VR fails to recapitulate its response to the real world.
We might intuit that many of the areas of cortex that are activated during particular sensory events, as well as the emotions that accompany them—say, fear—must also activate under the effect of VR. But recent research suggests that this might not be the case.
Mayank Mehta, a professor at UCLA, writes, “the pattern of activity in a brain region involved in spatial learning in the virtual world is completely different than when it processes activity in the real world.”
Mehta's research comparing rodents exploring real versus virtual worlds showed random firing of hippocampal neurons, which are normally responsible for building a cognitive map of space, as if the neurons could not identify where the rodent was, even though the animal behaved just as it would during real-world exploration. Furthermore, although the rodent's hippocampal neurons were highly active in the real-world environment, more than half of those neurons fell silent in the virtual space.
Perhaps the difference in activity persists because there are still many ways that VR cannot fool the brain, which may have just as much to do with the brain’s perceptual limits as with the limitations of the technology.
One of the biggest problems with VR is latency: the tiny but perceptible delay between motion in VR and the corresponding change in the image, creating a mismatch between movement and vision. In real life, the delay is effectively zero. While VR can get latency as low as 20 milliseconds, it will never reach zero, since it will always take time for a computer to register movement and then generate a new image.
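To see why those milliseconds add up, here is a rough back-of-the-envelope sketch of a motion-to-photon latency budget. The stage names and timings are hypothetical round numbers chosen for illustration, not measurements from any particular headset:

```python
# Hypothetical motion-to-photon latency budget for a VR headset.
# Every stage's timing below is an assumed round number for illustration.
stages_ms = {
    "sensor sampling": 2,    # head tracker reads the motion
    "tracking fusion": 2,    # pose estimated from sensor data
    "rendering": 11,         # drawing one frame at roughly 90 Hz
    "display scan-out": 5,   # panel refresh and pixel switching
}

total = sum(stages_ms.values())
print(f"total motion-to-photon latency: {total} ms")  # prints 20 ms
```

Even if any single stage is fast, the stages run in sequence, so their delays sum; that is why the total can approach 20 milliseconds while no individual step feels slow.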
Another problem with VR is that of vergence-accommodation: you can look at the far-off horizon of a virtual landscape, but your eyes will not be convinced. In reality, objects that are close tend to be in focus, while those far away are not. In VR, however, the screen is always in focus no matter the depth of the visual field, and this creates another disconnect in the brain's map of space.
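The conflict has simple geometry behind it. The eyes rotate inward (vergence) by an angle set by the distance to the virtual object, while the focus cue (accommodation) stays pinned at the headset's fixed focal distance. The sketch below uses assumed numbers, a 63 mm interpupillary distance and a 2 m focal distance, purely for illustration:

```python
import math

# Assumed numbers for illustration only:
IPD_M = 0.063            # interpupillary distance, 63 mm
FOCAL_DISTANCE_M = 2.0   # fixed focal distance of a hypothetical headset

def vergence_angle_deg(distance_m):
    """Angle the two eyes rotate inward to fixate a point at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

# Vergence changes with the virtual object's distance, but the focus cue
# always corresponds to the screen's fixed focal distance.
for virtual_distance in (0.5, 2.0, 100.0):
    vergence = vergence_angle_deg(virtual_distance)
    focus_cue = vergence_angle_deg(FOCAL_DISTANCE_M)
    print(f"{virtual_distance:>6.1f} m: vergence {vergence:.2f} deg, "
          f"focus cue {focus_cue:.2f} deg")
```

For a nearby virtual object the two cues diverge sharply; only when the object happens to sit at the headset's focal distance do they agree, which is the mismatch the brain notices.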
We're still a long way from a perfect VR, but this says more about our incomplete picture of how the brain works than about a failure of the technology. Understanding why VR is imperfect at fooling the brain may lead to a better understanding of the neuroscience of sensory perception and integration. In this way, VR will not only become a more prominent platform for displaying elaborate sensory narratives; these narratives and their virtual worlds can also teach us how the brain makes sense of a sensory-rich world.
And, if nothing else, at least it’ll make some pretty awesome games.
Mehta, M. Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality. Nature Neuroscience 18, 121–128 (2015).
This post is a submission to the 2nd Annual Lions Talk Science Blog Award! You can view other submissions from the 2015 contest here.
Lina Jamis is a 2nd year student in the Anatomy Graduate Program. She works in Dr. Christopher Yengo's lab studying molecular motors and human deafness. Lina enjoys ultimate Frisbee, CrossFit, and general nerdiness.