A Cornell University researcher has developed sonar glasses that can "hear" you even when you don't speak aloud. The eyeglass attachment uses tiny microphones and speakers to read the words you silently mouth, letting you pause or skip a music track, enter a passcode without touching your phone, or work on CAD models without a keyboard.
Cornell Ph.D. student Ruidong Zhang developed the system, which builds on a similar project the team created using a wireless earbud and, before that, on models that relied on cameras. The glasses form factor removes the need to face a camera or put something in your ear. "Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible," said Cheng Zhang, Cornell assistant professor of information science. "We're moving sonar onto the body."
The researchers say the system only requires a few minutes of training data (for example, reading a series of numbers) to learn a user’s speech patterns. Then, once it’s ready to work, it sends and receives sound waves across your face, sensing mouth movements while using a deep learning algorithm to analyze echo profiles in real time “with about 95 percent accuracy.”
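The core sensing idea described above can be sketched in a few lines: emit a known ultrasonic pulse, record what bounces back off the face, and cross-correlate the recording with the emitted pulse to build an "echo profile" whose peaks mark reflecting surfaces. The sketch below is a minimal illustration of that principle, not the researchers' actual pipeline; the chirp parameters, sample rate, and function names are all illustrative assumptions, and the real system feeds sequences of such profiles into a deep learning model rather than reading peaks directly.

```python
import numpy as np

def make_chirp(fs=50_000, duration=0.012, f0=18_000, f1=24_500):
    """Generate a linear frequency-modulated chirp, standing in for the
    near-inaudible pulses the glasses' tiny speakers would emit.
    (Parameter values here are illustrative, not from the paper.)"""
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration)))

def echo_profile(chirp, received):
    """Cross-correlate the received signal with the emitted chirp.
    Peaks correspond to reflecting surfaces (lips, cheeks, jaw); as the
    mouth moves, the profile changes, which is what a classifier learns."""
    corr = np.correlate(received, chirp, mode="valid")
    return np.abs(corr)

# Simulate a single reflection arriving 40 samples after emission.
fs = 50_000
chirp = make_chirp(fs)
received = np.zeros(len(chirp) + 200)
received[40:40 + len(chirp)] += 0.5 * chirp  # attenuated, delayed echo

profile = echo_profile(chirp, received)
print(int(np.argmax(profile)))  # strongest reflection at the simulated delay: 40
```

In a real silent-speech system, a stream of these profiles over time, rather than any single peak, would be the input that a trained network maps to mouthed words.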
The system does this while offloading data processing (wirelessly) to your smartphone, allowing the accessory to remain small and unobtrusive. The current version offers around 10 hours of battery life for acoustic sensing. Additionally, no data leaves your phone, eliminating privacy concerns. “We’re very excited about this system because it really pushes the field forward on performance and privacy,” said Cheng Zhang. “It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”
Robin Edgar