Tim Sandle wearing headphones. Image by Tim Sandle.
In a noisy environment, such as at work or when travelling, many people resort to wearing noise-cancelling headphones to reduce the level of background noise and achieve a measure of concentration or rest. Yet on many occasions we want to hear what is close to us, just not the clatter of sounds that surround us.
Take the office, for example: if we’re working on a complex spreadsheet in an open-plan space, we need to focus, and filtering out the water-cooler gossip is a great benefit. Yet if a colleague comes over to ask a question, the headphones have to come off.
But what if you could keep the headphones on, continue to filter out the annoying chatter, and still hear someone at close range – within your personal bubble?
This is what scientists from the University of Washington have achieved. The researchers have developed a headphone prototype that allows listeners to hear people speaking within a bubble with a programmable radius of 3 to 6 feet.
Voices and sounds outside the bubble are quieted by an average of 49 decibels (approximately the difference between a vacuum and rustling leaves), even if they are louder than those in the bubble.
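Decibels are logarithmic, so a 49-decibel reduction is larger than it may sound. A quick sketch of the arithmetic, purely for illustration:

```python
# Decibels are logarithmic: every 20 dB cuts sound amplitude tenfold.
def db_to_amplitude_ratio(db: float) -> float:
    """How many times quieter a sound becomes, in amplitude terms."""
    return 10 ** (db / 20)

def db_to_power_ratio(db: float) -> float:
    """The same reduction expressed as a ratio of sound power."""
    return 10 ** (db / 10)

print(f"{db_to_amplitude_ratio(49):.0f}x quieter in amplitude")  # ~282x
print(f"{db_to_power_ratio(49):.0f}x quieter in power")          # ~79,433x
```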
According to lead researcher Shyam Gollakota: “Humans aren’t great at perceiving distances through sound, particularly when there are multiple sound sources around them. Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far.”
Central to the new headphones are six small microphones positioned across the headband, which feed into a neural network running on a small onboard computer attached to the headphones. The network tracks when different sounds reach each microphone. The system then suppresses the sounds coming from outside the bubble, while playing back, and slightly amplifying, the sounds inside it.
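The team’s neural network and array geometry are not spelled out here, but the cue the system relies on (the slightly different moment at which a sound reaches each microphone) can be illustrated with a classic cross-correlation estimate. A minimal sketch, assuming just two microphones and a 16 kHz sample rate, both illustrative figures rather than the prototype’s specifications:

```python
import numpy as np

FS = 16_000  # sample rate in Hz (an illustrative assumption)

def tdoa(sig_a: np.ndarray, sig_b: np.ndarray, fs: int = FS) -> float:
    """Estimate the time difference of arrival between two microphone
    signals; positive means the sound reached mic B later than mic A."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # lag in samples
    return lag / fs

# Simulate one wideband source arriving 0.5 ms later at the second mic.
rng = np.random.default_rng(0)
src = rng.standard_normal(FS // 4)        # 0.25 s of source signal
delay = int(0.0005 * FS)                  # 0.5 ms = 8 samples at 16 kHz
mic_a = src
mic_b = np.concatenate([np.zeros(delay), src[:-delay]])

print(f"estimated arrival difference: {tdoa(mic_a, mic_b) * 1000:.2f} ms")
```

With six microphones, the prototype has many such pairwise timing cues available at once, which is what the network draws on to decide what lies inside versus outside the bubble.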
Addressing the new technology, Gollakota explains: “Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself.”
To train the system to create sound bubbles in different environments, the researchers needed a distance-based sound dataset collected in the real world, which was not available. To gather one, they put the headphones on a mannequin head. A robotic platform rotated the head while a moving speaker played noises from different distances. The team collected data with the mannequin system, as well as with human users, in 22 different indoor environments, including offices and living spaces.
The AI algorithm compares the phase of each sound frequency across the microphones to determine the distance of any sound source (a person talking, for example).
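To make the phase idea concrete: when a sound reaches a second microphone some delay later, each frequency f in the signal accrues a phase shift of 2πf times that delay, so comparing phases across frequencies reveals the timing offset. A minimal sketch of that relationship (the paper’s learned model is of course far more sophisticated, and the sample rate and signals below are illustrative):

```python
import numpy as np

FS = 16_000  # illustrative sample rate in Hz

def per_frequency_delay(sig_a: np.ndarray, sig_b: np.ndarray, fs: int = FS):
    """Phase difference between two mics at each frequency, converted to
    a time delay: a crude stand-in for the cues a model could learn.
    Only bins where the source actually has energy carry meaningful phase."""
    spec_a = np.fft.rfft(sig_a)
    spec_b = np.fft.rfft(sig_b)
    freqs = np.fft.rfftfreq(len(sig_a), d=1 / fs)
    # Phase lead of mic A over mic B; equals 2*pi*f*dt for a delay dt.
    dphi = np.angle(spec_a * np.conj(spec_b))
    return freqs[1:], dphi[1:] / (2 * np.pi * freqs[1:])  # skip the DC bin

# Demo: a 440 Hz tone arriving one sample (62.5 microseconds) later at mic B.
t = np.arange(FS) / FS
mic_a = np.sin(2 * np.pi * 440 * t)
mic_b = np.roll(mic_a, 1)  # one-sample delay (signal is periodic here)

freqs, delays = per_frequency_delay(mic_a, mic_b)
bin_440 = np.argmin(np.abs(freqs - 440))
print(f"delay recovered at 440 Hz: {delays[bin_440] * 1e6:.1f} microseconds")
```

For a nearby talker, these per-pair delays vary with distance as well as direction, because the wavefront is curved rather than flat; that is plausibly the kind of near-field cue the network learns to exploit.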
Next, the researchers are creating a startup to commercialize the technology.
The research, titled “Hearable devices with sound bubbles”, appears in the journal Nature Electronics.