Recently, scientists have begun exploring augmented reality as a navigation aid for blind people. Caltech researchers have developed an algorithm that helps blind users find their way through new and unfamiliar locations. The computer vision system is built on a simple idea: real-world objects announce themselves to the user upon entering a room.
At the core of the program is software called CARA, the Cognitive Augmented Reality Assistant. CARA gives objects a voice by converting information from the surroundings into audio messages. Users hear these voices while wearing the HoloLens, Microsoft's headset, which the Caltech team uses to lay a digital mesh over the real-world scene.
The HoloLens detects objects, and spatialized sounds announce each one from the direction where it sits. However, a continuous stream of announcements can overwhelm the user, so the Caltech team introduced different modes that simplify the experience and prevent information overload.
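The idea above can be sketched in a few lines. This is not CARA's actual implementation (which runs on the HoloLens with true spatialized audio); it is a minimal illustration, with invented object names and a hypothetical "spotlight" mode, of how detected objects might be ranked by distance, limited to a few at a time, and turned into directional spoken cues:

```python
import math
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    x: float  # meters relative to the user; positive x = to the right
    z: float  # meters relative to the user; positive z = straight ahead

def bearing_degrees(obj: DetectedObject) -> float:
    """Angle of the object relative to straight ahead (negative = left)."""
    return math.degrees(math.atan2(obj.x, obj.z))

def announcements(objects, mode="spotlight", max_items=3):
    """Turn detected objects into spoken cues.

    mode="spotlight" lets only the nearest few objects speak, a toy
    version of limiting the flow of information to avoid overload.
    """
    ranked = sorted(objects, key=lambda o: o.x ** 2 + o.z ** 2)
    limit = max_items if mode == "spotlight" else len(ranked)
    for obj in ranked[:limit]:
        angle = bearing_degrees(obj)
        side = "left" if angle < -10 else ("right" if angle > 10 else "ahead")
        dist = math.hypot(obj.x, obj.z)
        yield f"{obj.name}, {dist:.1f} meters {side}"

# Example scene: three objects detected around the user.
scene = [
    DetectedObject("door", -1.5, 2.0),
    DetectedObject("chair", 0.2, 1.0),
    DetectedObject("stairs", 3.0, 4.0),
]
for cue in announcements(scene, mode="spotlight", max_items=2):
    print(cue)  # nearest two objects call out with distance and direction
```

In the real system, the direction would be conveyed by where the synthesized voice appears to come from rather than by the words "left" and "right", but the filtering-by-mode step is the same kind of logic.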
“All the technology of VR and AR is about acquiring the information from the scene and then converting it to other uses.”
~ Markus Meister, Professor at Caltech
Technology like this could open up real possibilities for people who have lost some or all of their sight. Something as simple as entering a room or climbing a stair may seem trivial to sighted people, but for blind people, the independence to do these things unaided can matter enormously.
Early testing of Caltech’s program has shown a lot of promise, but the team still has its work cut out for it. An important open question is how well CARA and the HoloLens perform in larger public spaces such as amusement parks, stores, and shopping malls.