JAKARTA (IndoTelko) - Augmented Reality is about designing meaningful experiences that use technology to extend human capacity, including bringing a sense of sight to those who cannot see.
Imagine your clothing being able to sense the world around you, or your computer responding to your voice and hand commands, or a pair of eyeglasses learning your idiosyncrasies. Now imagine what this would mean if you were blind.
Many new tools make it easy to augment reality and make your life a little easier. Word Lens, recently acquired by Google and folded into Google Translate, can translate signs or menus into another language. Sky Map, another app, helps identify stars and planets in the night sky. In London, an interactive AR guide helps visitors delve deeper into museum exhibits.
Rajiv Mongia, director of the Intel RealSense Interaction Design Group, is turning into reality the idea that AR can extend human perception and help us better engage with the physical environment around us. Mongia and his team developed a prototype that has the potential to help blind and vision-impaired people gain a better sense of their surroundings.
The system pairs RealSense 3D camera technology with vibrating sensors integrated into clothing. The prototype uses the camera's depth information to sense the environment around the user, and feedback is sent to the wearer through haptic technology: vibration motors placed on the body provide tactile feedback.
Mongia compares it to the vibration mode on your phone. “Now the intensity of that touch is proportional to how close that object is to you,” he said. “So if it’s very close to you, the vibration is stronger. If it’s further away from you, it’s lower.”
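The proximity-to-intensity mapping Mongia describes can be sketched in a few lines of code. The example below is purely illustrative: the range limits and motor zones are assumptions for demonstration, not details of the Intel prototype, whose implementation has not been published.

```python
# Illustrative sketch of the proximity-to-vibration mapping Mongia describes.
# The range limits and zones below are assumptions for demonstration,
# not details of the actual Intel RealSense prototype.

MIN_RANGE_M = 0.3   # objects at or inside this distance get full intensity
MAX_RANGE_M = 3.0   # objects beyond this distance produce no vibration

def depth_to_intensity(depth_m: float) -> float:
    """Map a depth reading to vibration strength: closer means stronger."""
    if depth_m <= MIN_RANGE_M:
        return 1.0
    if depth_m >= MAX_RANGE_M:
        return 0.0
    # Linear falloff between the near and far limits.
    return (MAX_RANGE_M - depth_m) / (MAX_RANGE_M - MIN_RANGE_M)

if __name__ == "__main__":
    # Simulated depth readings (meters) for three body-mounted motor zones.
    for zone, depth in [("left", 0.5), ("center", 1.8), ("right", 4.0)]:
        print(f"{zone}: {depth:.1f} m -> intensity {depth_to_intensity(depth):.2f}")
```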
Darryl Adams, a technical project manager at Intel who was diagnosed with Retinitis Pigmentosa 30 years ago, has been testing the wearable system. Adams says the technology allows him to make the most of the vision he does have by augmenting his peripheral vision with the sensation of touch.
“For me, there is tremendous value in the ability to recognize when change occurs in my periphery,” Adams said. “If I am standing still and I feel a vibration, I am instantly able to turn in the general direction to see what has changed. This would typically be somebody approaching me, so in this case I can greet them, or at least acknowledge they are there.
“Without the technology, I typically miss this type of change in my social space so it can often be a bit awkward,” he said.
Mongia said the team is exploring designs that are not necessarily one-size-fits-all. The system was tested on three wearers, each with very different needs and levels of vision, from low vision to fully blind. “I think it’s going to be something that needs to in a sense either adapt to you or be customizable for each individual to basically meet their particular needs,” Mongia said.
OrCam is another device designed for the visually impaired. It uses “machine learning,” a form of Artificial Intelligence, to help users interpret and better interact with their physical surroundings. The device can read text and recognize things like products, paper currency and traffic lights.
OrCam attaches to the side of any pair of glasses. The front contains a camera that continuously scans the user’s field of view; the back houses a bone conduction speaker that transmits sound to the wearer without blocking the ear. The camera is connected via a thin cable to a small processing unit that sits in the user’s pocket.
With OrCam, the user shows the device what they are interested in by pointing at it. “Point at a book, the device will read it,” said Yonatan Wexler, head of Research and Development at OrCam. “Move your finger along a phone bill, and the device will read the lines letting you figure out who it is from and the amount due.”
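OrCam's software is proprietary, but the point-to-read step Wexler describes can be approximated with open-source parts. The sketch below is a rough stand-in, not OrCam's pipeline: pytesseract substitutes for its OCR engine, and the fingertip location, which the real device would get from hand tracking, is faked with the frame's center.

```python
# Rough sketch of the "point at text, hear it read" step. This is not
# OrCam's pipeline: pytesseract stands in for its proprietary OCR, and a
# real device would locate the fingertip with a hand-tracking model.
import cv2                # pip install opencv-python
import pytesseract        # pip install pytesseract (needs the Tesseract binary)

def read_pointed_region(frame, fingertip_xy, half_box=150):
    """OCR a square window around the pointed-at spot and return the text."""
    x, y = fingertip_xy
    h, w = frame.shape[:2]
    crop = frame[max(0, y - half_box):min(h, y + half_box),
                 max(0, x - half_box):min(w, x + half_box)]
    return pytesseract.image_to_string(crop).strip()

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)   # webcam standing in for the headset camera
    ok, frame = cam.read()
    cam.release()
    if ok:
        # Placeholder: use the frame center where hand tracking would
        # normally supply the fingertip coordinates.
        center = (frame.shape[1] // 2, frame.shape[0] // 2)
        text = read_pointed_region(frame, center)
        print("Would read aloud:", text or "(no text found)")
```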
Wexler says there is no need to point when identifying people and faces. “The device will tell you when your friend is approaching you. It takes about ten seconds to teach the device to recognize a person,” he said. “All it takes is having that person look at you and then stating their name.”
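The enroll-once, announce-later flow Wexler describes maps onto standard face recognition tooling. Here is a minimal sketch, assuming the open-source face_recognition library in place of OrCam's own models and hypothetical image files in place of live camera frames:

```python
# Sketch of the "teach a face in ten seconds" flow using the open-source
# face_recognition library as a stand-in for OrCam's proprietary models.
# The image file names are hypothetical.
import face_recognition   # pip install face_recognition

known_encodings = []
known_names = []

def enroll(image_path: str, name: str) -> None:
    """Teach the system one person: they look at the camera, a name is stated."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_encodings.append(encodings[0])
        known_names.append(name)

def announce(image_path: str) -> None:
    """Announce any enrolled person visible in a new camera frame."""
    frame = face_recognition.load_image_file(image_path)
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        for matched, name in zip(matches, known_names):
            if matched:
                print(f"{name} is approaching you")  # real device: spoken audio

enroll("friend.jpg", "Alex")    # hypothetical enrollment photo
announce("street_scene.jpg")    # hypothetical incoming camera frame
```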
Wexler said that to teach the system to read, it is repeatedly shown millions of examples, so the algorithms learn to focus on relevant and reliable patterns.
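That description matches ordinary supervised learning. As a toy illustration of the idea, with no connection to OrCam's actual models or data, the snippet below trains a digit recognizer on scikit-learn's small built-in dataset; a production reading system would do the same with millions of text examples.

```python
# Toy illustration of training a character recognizer on labeled examples.
# OrCam's models and data are proprietary; this uses scikit-learn's small
# built-in digits dataset purely to show the idea.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 labeled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Showing the model many labeled examples lets it latch onto the reliable
# pixel patterns that distinguish one character from another.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")
```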
In this seductive technological era of overlays and virtual immersion, how do we ensure that emerging technologies like AR do not replace our ‘humanness’?
“I think the thing that we always need to keep in mind is that it’s not about replacing what a human does,” said Intel’s Mongia. “What we should really be focused on is what the human perhaps doesn’t do well.”(es)