Aug 2021 - Nov 2021
HoloLens 2, Unity 3D, C#, Figma, Adobe Illustrator
Cooperated with Jialing Li and Ming Chen on user research and ideation.
Our team completed the user research and the first draft of Findit in Aug 2020. Since then, I have updated the project on my own; as the key contributor, I have been involved in every stage of the process.
People with moderate to severe visual impairment (MSVI) account for the largest proportion of the visually impaired group. Although they retain light perception, their limited vision still makes everyday tasks difficult: reaching or finding something nearby often costs them extra time and energy.
In this project, "Findit", we designed a mixed-reality-driven solution for HoloLens, together with a companion app for mobile devices, to help users find daily items more quickly and easily. In brief, the pair combines mixed reality and machine learning to provide location-based guidance, and collects personalized information such as voice tags for possible extensions. When needed, the app also helps users contact family members and friends for extra assistance.
Drawing on pre-recorded information and images of daily items, the HoloLens can lead people to items placed in familiar environments. Combined with mixed reality and machine learning, the headset can quickly adapt to unfamiliar spaces as well.
In practice, once the user puts on the HoloLens, a wanted item becomes a big shiny spotlight that "stands out" in their visual field. In some cases, personalized voice messages tagged to the item are also played to reinforce the user's memories of it, such as its purchase date, medication directions, or simply the name of the person who gave it as a present.
There are five main built-in functions: Identify items, Seek help from families & friends, Locate items, Make voice tags, and Read voice tags. By combining mixed reality and machine learning, people with MSVI can find and identify items more quickly and easily in both familiar and unfamiliar environments.
Identify items
Seek help from families & friends
Make voice tags
Locate items
Families and friends can offer instant remote assistance to people with MSVI by answering video calls or responding to text messages within the app. In addition, they can add voice tags to new items they give to the user, or maintain the user's database by downloading new information about daily items from the online knowledge repository in the future.
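To illustrate how such shared voice tags and repository updates might be organized, here is a small, purely hypothetical C# data model; none of the type or field names come from the project.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical data model for the shared ecosystem: voice tags created by
// family members in the mobile app are synced into the user's item database.
// All names and fields here are illustrative assumptions.
public class VoiceTagRecord
{
    public string ItemId;         // which daily item the tag belongs to
    public string AuthorName;     // e.g., the family member who recorded it
    public string AudioFilePath;  // the recorded voice message
    public DateTime CreatedAt;
}

public class ItemDatabase
{
    private readonly Dictionary<string, List<VoiceTagRecord>> tagsByItem =
        new Dictionary<string, List<VoiceTagRecord>>();

    // Called when the mobile app pushes a new tag, or when new item
    // information is downloaded from the online knowledge repository.
    public void AddTag(VoiceTagRecord tag)
    {
        if (!tagsByItem.TryGetValue(tag.ItemId, out var tags))
            tagsByItem[tag.ItemId] = tags = new List<VoiceTagRecord>();
        tags.Add(tag);
    }

    public IReadOnlyList<VoiceTagRecord> GetTags(string itemId) =>
        tagsByItem.TryGetValue(itemId, out var tags)
            ? (IReadOnlyList<VoiceTagRecord>)tags
            : Array.Empty<VoiceTagRecord>();
}
```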
Visual impairment:
Visual impairment is a condition caused by eye disease in which the visual acuity of the better-seeing eye is around 20/70 or poorer and cannot be corrected or improved with regular eyeglasses.
MSVI:
According to the 11th Revision of the International Classification of Diseases (05/2021), if a patient's vision in the better eye with the best possible glasses correction is between 20/70 and 20/400, they are considered to have moderate to severe visual impairment. (An acuity of 20/70 means seeing at 20 feet what a person with normal vision sees at 70 feet.)
Light Perception: Able to perceive the difference between light and dark, or daylight and nighttime. Because people with MSVI who have light perception can determine the general source and direction of light, they are considered the largest group with the best chance of achieving a better quality of life.
Despite facing severe challenges in many aspects of their lives, people with MSVI are often marginalized by social infrastructure and left behind by technological advancement.
At first, "mixed reality" technology integrating virtual information into the physical environment seems a chance for them to enhance vision. However, it soon leads to another question of equal accessibilty, since the virtual world is still dominated by the visual.
Aware of this, research organizations such as the School of Information Sciences at Cornell University have started multiple programs to improve the visual recognition ability of people with MSVI through artificial intelligence and augmented reality, aiming to empower them in both the real and virtual worlds. For example, in an AI-based enhanced perception (EP) system, information around the user is retrieved, communicated, and enhanced before being presented on AR interfaces at different perception levels.
These studies have yielded important results in developing functions that support people with MSVI in everyday tasks such as shopping and navigation.
Purpose - Understanding how people with MSVI deal with everyday tasks
Methods - Observation / Focus group
Subjects for observation - Chunlan SHEN
Focus group - Chunlan SHEN & her friends (all with MSVI)
We found Chunlan, a person with typical MSVI who works as a massage therapist, through an online community for the visually impaired. She invited us to observe her life at home.
We first noted down her daily tasks, rated the difficulty and frequency of each, and asked about her living habits. Among all tasks, "finding and distinguishing items" troubled her most frequently every day, and it often directly affected other activities, so we settled on it as our subject.
Purpose - Understanding difficulties about finding and identifying items
Methods - Interview
Subjects for the interview - 4 people with MSVI, 2 friends/relatives (Normal vision)
Because eyesight varies widely among visually impaired people, and because people with MSVI often seek assistance from others, we interviewed not only people with MSVI but also their friends and relatives.
Key findings:
1. Although people with MSVI have light perception, it does not help them much when they want to find items.
2. They have to store items in specific locations and keep them arranged and tidy, which is already a big challenge for them.
3. They cannot distinguish items with similar shapes. Sometimes they can reach an item but not know how to use it.
4. Some people with MSVI have relatively high self-esteem and a desire to live independently. Their family members and friends do want to help, but it is hard to provide just the right amount of assistance.
Based on the user research, we developed an ecosystem for people with MSVI and their families and friends. It contains two applications: a HoloLens application that helps people with low vision locate and identify items, and a mobile application that lets their friends and relatives assist them remotely.
1. Use technologies like MR and machine learning to empower people with MSVI
With the combination of mixed reality and machine learning, people with MSVI can locate items in both familiar and unfamiliar environments through the HoloLens application, because each item's location is converted into a light spot signal that they can easily perceive (see the sketch below).
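To make this concrete, here is a minimal Unity C# sketch of the idea, not the project's actual code: it pins a glowing spot to an item's stored world position and pulses it gently. The `itemAnchor` reference and the pulse parameters are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal sketch: renders a pulsing light spot at a stored item location.
// "itemAnchor" stands in for wherever the item's position comes from
// (e.g., a saved spatial anchor or a recognition result) -- an assumption.
public class ItemLightSpot : MonoBehaviour
{
    public Transform itemAnchor;    // stored location of the wanted item
    public Light spotLight;         // a point light acting as the "spotlight"
    public float pulseSpeed = 2f;   // gentle pulse; fast flashing causes discomfort
    public float baseIntensity = 1f;

    void Update()
    {
        if (itemAnchor == null || spotLight == null) return;

        // Keep the light spot pinned to the item's location.
        transform.position = itemAnchor.position;

        // Pulse the intensity so the spot "stands out" without harsh flashing.
        float pulse = 0.5f + 0.5f * Mathf.Sin(Time.time * pulseSpeed);
        spotLight.intensity = baseIntensity * pulse;
    }
}
```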
2. Use voice tags to store users’ personal information about the item
Moreover, users can attach personalized voice tags to each item to help them remember more information, such as purchase dates, usage instructions, and givers' names (see the sketch below).
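As one way this could work, the sketch below records and replays a voice tag with Unity's built-in Microphone and AudioSource APIs; the recording length and default-microphone choice are assumptions, and the real project may handle this differently.

```csharp
using UnityEngine;

// Sketch: record a short voice tag with the device microphone and replay
// it when the tagged item is found. Durations and the default-microphone
// assumption are illustrative, not taken from the project.
public class VoiceTagRecorder : MonoBehaviour
{
    public AudioSource audioSource;
    private AudioClip tagClip;

    public void StartRecording()
    {
        // null = default microphone; record up to 10 s at 44.1 kHz.
        tagClip = Microphone.Start(null, false, 10, 44100);
    }

    public void StopRecording()
    {
        Microphone.End(null);
    }

    public void PlayTag()
    {
        if (tagClip == null) return;
        audioSource.clip = tagClip;
        audioSource.Play();   // e.g., triggered when the item is located
    }
}
```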
3. Families & friends provide remote assistance to people with MSVI
People close to the user can offer remote assistance through the mobile application and add voice tags to the presents they give.
Since our goal was to find an appropriate light spot design for HoloLens, I tested how people with MSVI perceive and recognize light spots varying in flashing speed, size, and brightness before prototyping.
Test Light Spot Flashing Speed
Flashing light spots are easier to identify, but flashing that is too fast causes discomfort to the user's eyes.
One static light spot and three with different flashing speeds.
Test light spot size
Spots of all tested sizes were easy to identify, so size can be adjusted to reflect the distance from the object to the user.
Four light spots from small to medium size.
Test light spot brightness
Although HoloLens has ten preset brightness levels, our subject still found the lowest level a bit too bright during testing. The device's brightness range therefore needs to extend lower to accommodate users who are sensitive to light.
Ten light spots from dark to bright.
Key findings:
1. Using light spots as the key pointer is feasible because they are easy to identify for MSVI users.
2. The flashing speed and brightness of the light spot need to be kept within comfortable limits.
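To summarize these parameters in code, the sketch below (my illustration, with assumed limits) exposes the three tested properties on a Unity light spot: flashing speed and brightness are clamped to comfortable ranges, and size is scaled by the distance to the user.

```csharp
using UnityEngine;

// Sketch of a light spot tuned by the test findings: flashing speed and
// brightness are clamped, and size encodes distance to the user.
// Every numeric limit below is an illustrative assumption.
public class TunedLightSpot : MonoBehaviour
{
    public Transform user;  // e.g., the HoloLens camera transform
    [Range(0.5f, 2f)] public float flashHz = 1f;         // capped to avoid discomfort
    [Range(0.05f, 0.5f)] public float brightness = 0.2f; // kept low for light-sensitive users

    private Renderer spotRenderer;

    void Awake()
    {
        spotRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        // One possible mapping: farther items get a larger spot so they
        // remain easy to perceive across the room.
        float distance = Vector3.Distance(transform.position, user.position);
        transform.localScale = Vector3.one * Mathf.Clamp(distance * 0.1f, 0.05f, 0.5f);

        // Slow, clamped flashing expressed through emission strength
        // (assumes a material with emission enabled).
        float blink = 0.5f + 0.5f * Mathf.Sin(Time.time * flashHz * 2f * Mathf.PI);
        spotRenderer.material.SetColor("_EmissionColor", Color.white * brightness * blink);
    }
}
```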
Light spots & Object recognition simulation
Prototyping: I used the Vuforia Engine for the item recognition part and Unity to render the light spots. The Mixed Reality Toolkit was also adopted to build the application and deploy it to HoloLens.
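As a rough sketch of how these pieces could be wired together, the handler below subscribes to a Vuforia target's tracking status and toggles a light spot accordingly. It follows the ObserverBehaviour event API of recent Vuforia Engine versions; `lightSpot` is an assumed reference, and the prototype's actual wiring may differ.

```csharp
using UnityEngine;
using Vuforia;

// Sketch: when a Vuforia target (a registered daily item) is tracked,
// show its light spot; hide it when tracking is lost.
public class ItemRecognitionHandler : MonoBehaviour
{
    public GameObject lightSpot;   // assumed reference to the spot object

    private ObserverBehaviour observer;

    void Awake()
    {
        observer = GetComponent<ObserverBehaviour>();
        if (observer != null)
            observer.OnTargetStatusChanged += OnStatusChanged;
    }

    void OnDestroy()
    {
        if (observer != null)
            observer.OnTargetStatusChanged -= OnStatusChanged;
    }

    private void OnStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        bool tracked = status.Status == Status.TRACKED ||
                       status.Status == Status.EXTENDED_TRACKED;
        lightSpot.SetActive(tracked);
    }
}
```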
Light spots & Dialogue flow testing
A subject wearing the HoloLens experienced how light spots could help them identify and locate items. During this process, I read the voice prompts aloud for the subject to check whether the information was helpful.
Limitations
Due to hardware limitations, the light spots would sometimes show discoloration, truncation, and delay, which might keep users from perceiving them clearly.
Results and iterations
1. Using light spots to locate items is much more convenient than searching by touch.
2. It is difficult for people with low vision to capture the light spot when the item is very close to their face (because the HoloLens camera sits on the wearer's forehead), so more prompts are needed to ensure that users can bring the item into position (see the sketch after this list).
3. Because of hardware limitations, the light spot sometimes shows discoloration, truncation, and delay.
4. The field of view of the glasses is too small, which reduces the efficiency of finding items.
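One possible mitigation for finding 2, sketched under my own assumptions: once the item comes within arm's reach, where the light spot becomes hard to capture, the system could switch from the visual spot to audio prompts.

```csharp
using UnityEngine;

// Sketch of a near-field fallback: within arm's reach the light spot is
// hard to see (the camera sits on the forehead), so switch to an audio cue.
// The 0.5 m threshold and the prompt audio itself are assumptions.
public class NearFieldPrompt : MonoBehaviour
{
    public Transform user;          // e.g., the headset camera transform
    public GameObject lightSpot;
    public AudioSource promptAudio; // pre-recorded guidance for close range
    public float nearThreshold = 0.5f;

    void Update()
    {
        float distance = Vector3.Distance(transform.position, user.position);
        bool isNear = distance < nearThreshold;

        // Hide the spot up close and fall back to audio guidance instead.
        lightSpot.SetActive(!isNear);
        if (isNear && !promptAudio.isPlaying)
            promptAudio.Play();
    }
}
```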
1. Take advantage of the residual vision of people with MSVI to help them find items.
2. Use voice tags and item libraries to meet the personalized needs of people with MSVI.
3. Overcome space constraints by using the app to let family and friends assist the user remotely.
User control:
1. People with MSVI have difficulty making voice tags themselves.
2. Because the HoloLens camera sits on the user's forehead, it is difficult to aim it at objects and capture a complete image in a natural fashion.
Hardware:
Due to device limitations, the light spot sometimes shows discoloration, truncation, and delay, which might affect the user experience.
Other factors:
1. The device cannot recognize objects in environments that are too bright or too dark.
2. The device cannot identify occluded objects.
References:
https://www.afb.org/blindness-and-low-vision/eye-conditions/low-vision-and-legal-blindness-terms-and-descriptions#VisualImpairment
https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
https://www.lasereyesurgeryhub.co.uk/data/visual-impairment-blindness-data-statistics/
https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/low-vision
https://chicagolighthouse.org/types-of-low-vision/
Zhao, Yuhang, et al. "Enabling people with visual impairments to navigate virtual reality with a haptic and auditory cane simulation." Proceedings of the 2018 CHI conference on human factors in computing systems. 2018.
Zhao, Yuhang, et al. "Designing AR visualizations to facilitate stair navigation for people with low vision." Proceedings of the 32nd annual ACM symposium on user interface software and technology. 2019.
Zhao, Yuhang, et al. "The effectiveness of visual and audio wayfinding guidance on smartglasses for people with low vision." Proceedings of the 2020 CHI conference on human factors in computing systems. 2020.
Zhao, Yuhang, et al. "CueSee: exploring visual cues for people with low vision to facilitate a visual search task." Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 2016.
https://arpost.co/2020/01/27/study-givevision-vr-device-helps-visually-impaired-recover-eyesight/
https://www.microsoft.com/en-us/ai/seeing-ai
https://techcrunch.com/2017/02/16/oxsight-uses-augmented-reality-to-aide-the-visually-impaired/