Sunday, July 19, 2015

Beyond gaming: virtual reality helps people with vision disabilities

Over the past two years, Sensics has been working with our customer Visionize and a group of researchers from the Wilmer Eye Institute at Johns Hopkins University, applying our combined expertise to creating a solution that helps people with vision disabilities. The Los Angeles Times published a story today about one of the models in this line. It's a good opportunity to describe this Low Vision project in greater detail and shine a light on non-gaming applications of consumer VR.

A prototype of the Visionize low-vision system is used to magnify the image of the boy above the monitor. The monitor - used for illustration purposes only - shows a screencast of what the user of the system sees in each eye.
Low vision is a common problem. It is estimated that about 2.5 million people in the United States - over 0.75% of the population - suffer from low vision (defined as best-corrected visual acuity of less than 20/60 in the better-seeing eye). While low vision is typically associated with aging, a large number of children are born with vision disabilities or develop them in their early years. In addition, hundreds of thousands of new patients enter the low-vision population every year.

The impact of low vision ranges from difficulty in reading to difficulty in recognizing people, places and objects. Disease progression can often be controlled, but the existing damage is permanent. Macular degeneration, a disease that destroys the area of central vision (the fovea), is the most common low-vision pathology. Because the resolution at the fovea is much higher than in the rest of the eye, losing it sharply reduces overall visual acuity.

Optical or digital magnifiers are popular with the low vision population and can be effective for static activities such as watching TV or reading. However, they are more challenging to use in dynamic activities such as walking:
A magnifier hides part of the text


  • They might be too large or cumbersome to hold
  • If they magnify the entire visual field, the user loses peripheral vision in a significant way. For instance, if a person using a 5x magnifier has a total field of view of 100 degrees and the magnifier covers that entire field, then only 20 degrees of the real world are mapped into those 100 degrees, making it very difficult to move around (see the quick check after this list).
  • If just part of the image is magnified, part of the scene is completely hidden underneath the magnified region, as illustrated in the diagram on the right.
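
As a quick back-of-the-envelope check of that field-of-view trade-off (the 100-degree field and 5x factor are simply the example numbers from the list above):

```python
# Uniform (full-field) magnification trades real-world field of view for detail.
display_fov_deg = 100.0  # example: user's total field of view through the device
magnification = 5.0      # example: full-field magnification factor

# Only this slice of the real world fits into the magnified view.
real_world_fov_deg = display_fov_deg / magnification
print(real_world_fov_deg)  # 20.0 degrees
```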

To address this problem, we developed a non-linear magnification algorithm that magnifies the image at the point of interest while producing a continuous image, so that nothing is lost at the edges. In the model covered in the LA Times, a Samsung Gear VR system is used. The on-board camera provides a live view of the environment and the customized algorithms perform real-time enhancement at 60 FPS to present a smartly magnified image to the user. Parameters such as the size of the "bubble" and the amount of magnification can be easily controlled. In some cases these depend on the viewing conditions, and in others they can be customized to the particular vision disability of the user.
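
To make the idea more concrete, here is a minimal sketch of one way a "bubble" warp like this could be implemented, assuming OpenCV and NumPy on a regular camera frame. The function and parameter names (bubble_magnify, bubble_radius, magnification) are illustrative and not the actual Visionize implementation:

```python
import cv2
import numpy as np

def bubble_magnify(image, center, bubble_radius, magnification):
    """Magnify the region around `center` while keeping the image continuous."""
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - center[0], ys - center[1]
    r = np.sqrt(dx * dx + dy * dy)

    # Normalized distance from the bubble center: 0 at the center, 1 at the rim.
    t = np.clip(r / bubble_radius, 0.0, 1.0)

    # Sampling scale: 1/magnification at the center (strong zoom), easing
    # smoothly back to 1.0 at the rim so the bubble blends into the untouched
    # surroundings with no visible seam.
    ease = t * t * (3.0 - 2.0 * t)  # smoothstep
    scale = 1.0 / magnification + (1.0 - 1.0 / magnification) * ease

    # Each output pixel samples a point pulled toward the center; pulling
    # samples inward is what magnifies the bubble on screen.
    map_x = center[0] + dx * scale
    map_y = center[1] + dy * scale
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```

Because the sampling scale blends smoothly from 1/magnification at the center back to 1.0 at the rim, the result stays continuous: the area around the bubble's edge is compressed rather than hidden, which is the property that distinguishes this kind of warp from a simple inset magnifier. Applying such a function to each camera frame (for example, bubble_magnify(frame, (w // 2, h // 2), 200, 3.0)) would produce the kind of live, adjustable magnification described above.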

More advanced models use different types of HMDs and have been tested with multiple cameras and other powerful additions. More about this, as well as additional vision enhancements, may be covered at a future opportunity.

An illustration of the magnified bubble can be seen in the diagram below:


and a video illustrating how to operate an ATM with the system (as well as other examples) can be seen on the Visionize site:


These days, gaming gets the majority of the press attention for virtual reality, but many other applications exist. For us, the ability to work on a product that truly improves the quality of life for people with vision disabilities is heartwarming.