Your ears may pinpoint the source of a sound thanks to the brain's neural code

Scientists have devised a new model of how the human brain locates the source of a sound, based on a more dynamic neural code


How do your ears pinpoint where a sound is coming from? The brain gauges the location by processing the difference in a sound's arrival time at the two ears, known as the interaural time difference (ITD). Biomedical engineers have long held that humans pinpoint sounds using a scheme that works like a spatial map or compass.
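
To put the ITD in concrete terms, here is a minimal sketch, not taken from the study, that estimates the time difference for a given source angle using the classic Woodworth spherical-head approximation; the head radius and speed of sound are assumed typical values.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate interaural time difference (Woodworth spherical-head model).

    azimuth_deg: source angle from straight ahead; positive = to the right.
    Returns the extra travel time (in seconds) to the far ear.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 30 degrees off-centre yields an ITD of roughly 0.26 ms,
# well within the sub-millisecond range the brainstem resolves.
print(f"{itd_seconds(30) * 1e6:.0f} microseconds")
```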

In this map, neurons are arranged from left to right, and each neuron fires only when activated by sounds arriving from its preferred angle, say 30 degrees to the left of centre. Drawing on new evidence, Antje Ihlefeld, director of NJIT's Neural Engineering for Speech and Hearing Laboratory, has proposed a different model based on a dynamic neural code.
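
The map-like scheme described above, formalized later in the article as the Jeffress model, can be sketched as a bank of coincidence detectors, each tuned to one internal delay, where the identity of the most strongly activated detector encodes direction. The code below is a hypothetical toy illustration, not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100                           # sample rate in Hz
true_itd = 2.6e-4                     # ~30 degrees to one side, in seconds

# Broadband noise reaching the left ear, arriving slightly later at the right.
n = fs // 10
left = rng.standard_normal(n)
shift = int(round(true_itd * fs))
right = np.concatenate([np.zeros(shift), left[:-shift]])

# A bank of coincidence detectors, each with its own internal delay (in samples).
candidate_lags = np.arange(-35, 36)
activation = []
for d in candidate_lags:
    if d >= 0:
        a, b = left[:n - d], right[d:]
    else:
        a, b = left[-d:], right[:n + d]
    activation.append(np.dot(a, b))   # coincidence strength ~ correlation

# In a place code, the *identity* of the winning detector is the answer:
best_lag = candidate_lags[int(np.argmax(activation))]
print(f"decoded ITD: {best_lag / fs * 1e6:.0f} microseconds")  # close to 260
```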

According to her, the discovery raises hope for designing hearing aids that restore a sense of sound direction, something current devices do poorly. The study was published in the journal eLife.

"If there is a static map in the brain that degrades and can't be fixed, that presents a daunting hurdle. It means people likely can't "relearn" to localize sounds well. But if this perceptual capability is based on a dynamic neural code, it gives us more hope of retraining peoples' brains," Ihlefeld notes. "We would program hearing aids and cochlear implants not just to compensate for an individual's hearing loss, but also based upon how well that person could adapt to using cues from their devices. This is particularly important for situations with background sound, where no hearing device can currently restore the ability to single out the target sound. We know that providing cues to restore sound direction would really help."

The neural engineer collaborated with Robert Shapley, an eminent neurophysiologist at NYU, who pointed to a peculiarity of human binocular depth perception, the ability to estimate how far away a visual object is, which likewise relies on a computation comparing the input received by the two eyes. Those distance estimates are less accurate for low-contrast stimuli, images that are harder to distinguish from their surroundings, than for high-contrast ones.

Could the same neural principle apply to sound localization? Is localization less accurate for quiet, softer sounds than for loud ones? If so, that would mark a significant departure from the Jeffress model, the spatial map theory.

That model holds that sounds of all volumes are processed and perceived in the same way. Physiologists who argue that mammals rely on a more dynamic neural model disagree. In their view, mammalian neurons fire at rates that depend on the direction of the incoming signal, and the brain compares those rates across different populations of neurons to build a map of the sound environment.
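
A toy version of that rate-based scheme is sketched below: two broadly tuned populations, one favouring each side, and a decoder that reads direction from the difference in their firing rates. The tuning width, gains and spontaneous rate are made-up numbers, and the decoder is deliberately calibrated at full volume, so softer sounds bias its estimate toward the centre, the kind of level dependence the researchers were looking for.

```python
import numpy as np

def population_rate(itd_us: float, level_gain: float, sign: int) -> float:
    """Toy firing rate of one hemisphere's population (spikes/s).

    Rate grows sigmoidally with ITD toward the preferred side (sign = +1 or -1),
    scales with overall sound level, and sits on 5 spikes/s of spontaneous activity.
    All numbers are illustrative, not fitted to data.
    """
    drive = 1.0 / (1.0 + np.exp(-sign * itd_us / 200.0))   # broad ITD tuning
    return 5.0 + 80.0 * level_gain * drive

def decode_itd(itd_us: float, level_gain: float) -> float:
    """Read out direction from the left/right rate difference, using a
    lookup calibrated at full volume (level_gain = 1)."""
    diff = population_rate(itd_us, level_gain, +1) - population_rate(itd_us, level_gain, -1)
    calib_itds = np.linspace(-800, 800, 1601)
    calib_diff = np.array([population_rate(i, 1.0, +1) - population_rate(i, 1.0, -1)
                           for i in calib_itds])
    return float(calib_itds[np.argmin(np.abs(calib_diff - diff))])

# The same 260-microsecond ITD is decoded accurately when loud,
# but is pulled toward the centre when the sound is soft.
for gain in (1.0, 0.3):
    print(f"gain={gain}: true ITD 260 us -> decoded {decode_itd(260.0, gain):.0f} us")
```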

"The challenge in proving or disproving these theories is that we can't look directly at the neural code for these perceptions because the relevant neurons are located in the human brainstem, so we cannot obtain high-resolution images of them," she says. "But we had a hunch that the two models would give different sound location predictions at a very low volume."

The researchers tracked down two studies, one in barn owls and one in rhesus macaques, that had recorded neural responses at these low sound levels.

"We expected that for the barn owl data, it really should not matter how loud a source is -- the predicted sound direction should be really accurate no matter the sound volume -- and we were able to confirm that. However, what we found for the monkey data is that predicted sound direction depended on both ITD and volume," she said. "We then searched the human literature for studies on perceived sound direction as a function of ITD, which was also thought not to depend on volume, but surprisingly found no evidence to back up this long-held belief."

To test their hypothesis directly, the researchers recruited volunteers on the NJIT campus and presented them with sounds designed to reveal whether volume affects where a sound seems to come from.

"We built an extremely quiet, sound-shielded room with specialized calibrated equipment that allowed us to present sounds with high precision to our volunteers and record where they perceived the sound to originate. And sure enough, people misidentified the softer sounds," notes Alamatsaz.

Their results reveal direct parallels between hearing and vision that had previously been overlooked, suggesting that rate-based coding is a basic operation the brain uses to compute spatial dimensions from sensory input.

"Because our work discovers unifying principles across the two senses, we anticipate that interested audiences will include cognitive scientists, physiologists and computational modeling experts in both hearing and vision," Ihlefeld says. "It is fascinating to compare how the brain uses the information reaching our eyes and ears to make sense of the world around us and to discover that two seemingly unconnected perceptions -- vision and hearing -- may in fact be quite similar after all."
