It's a familiar scene: you're at a crowded party, but despite all the noise, you're still able to make out the words of the one person with whom you're talking.
Researchers from the University of California, San Francisco studied this phenomenon, known as the "cocktail party effect," and found it has both psychological and neurological components. Their data are published in the latest issue of the journal Nature.
What happens from a neurological standpoint is that the sounds all enter the ear as one cacophonous roar; the brain then processes that information, tunes in to one sound, such as a person's voice, and filters out the rest.
"The psychological component is that it's a sound we want or need to hear, which is why we can tune into it," said co-author Dr. Edward Chang, an assistant professor of neuroscience.
Chang and postdoctoral scholar Nima Mesgarani looked at this effect in three subjects who were undergoing treatment for epilepsy. All three had normal hearing and were able to process speech normally. The authors recorded the subjects' brain activity with electrodes and asked them to listen to two speakers while focusing on only one.
By measuring brain activity, they were able to determine which speaker the subjects were attending to, and found that their brains responded to the targeted speaker rather than the ignored one.
This line of research is important, Chang said, because it can contribute to the understanding of how language processing is impaired in people with attention deficit disorder, autism, language-learning disorders and the deficits that occur as people get older.
"People with these disorders have problems with the ability to focus on a certain aspect of the environment," Chang said. "They can't always hear things correctly."
Understanding how the brain processes the human voice can also contribute to the development of new technologies that rely on voice recognition, the authors said in a university press release. The technology that exists now, such as Apple's Siri, may be advanced, but the human voice is far more complex than what such systems can reliably handle.
"Following one speaker in the presence of another can be trivial for a normal human listener, but remains a major challenge for state-of-the-art automatic voice recognition algorithms," the authors wrote.