Helping the Deaf Hear Music

John Redden is a deaf professional musician. He can sing on key, harmonize on key, and hear musical intervals well enough to reproduce them. He does this with a cochlear implant, which is a computer chip surgically embedded in his skull. The chip drives 16 tiny electrodes threaded into his inner ear that stimulate his auditory nerve. It gets auditory data from an external computer that sits on his ear and looks like a hearing aid. Instead of amplifying sound, though, it digitizes it and sends it to the implant by radio through the skin.

The technology is a marvel, but people like Redden are a mystery. The software is designed for speech, so it only "listens" to the speech frequencies rather than the much wider range occupied by music. The device delivers the overall shape of sound rather than the detailed frequency information that is crucial to distinguishing one pitch from another.

Most people with normal hearing can tell the difference between pitches that are 1.1 semitones apart. (A semitone is the smallest pitch interval in Western music.) But a 2002 study at the University of Iowa found that most implant users can only distinguish pitches when they are at least 7.6 semitones apart.
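To put those numbers in hertz: a semitone is a frequency ratio of the twelfth root of 2, so the two gaps cover very different amounts of acoustic territory. The little Python calculation below is my own back-of-the-envelope illustration, not part of the Iowa study; middle C at 261.63 Hz is just a convenient reference point.

    def shift_in_hz(base_hz, semitones):
        # Frequency reached by moving up by `semitones` from `base_hz`;
        # each semitone is a ratio of 2 ** (1/12).
        return base_hz * 2 ** (semitones / 12)

    base = 261.63  # middle C, in hertz
    for gap in (1.1, 7.6):
        upper = shift_in_hz(base, gap)
        print(f"{gap} semitones above middle C: {upper:.1f} Hz"
              f" (a jump of {upper - base:.1f} Hz)")

At middle C, a 1.1-semitone gap works out to roughly 17 Hz, while 7.6 semitones is roughly 144 Hz, an interval wider than a perfect fifth.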

Some progress has been made in writing better software for music. I'm a cochlear-implant user myself. In 2005, I tried new software, called Fidelity 120, that simulated seven virtual electrodes between each pair of physical electrodes, not unlike the way that an audio engineer can make a sound seem to come from between two speakers. By targeting nerve populations between each electrode, the software gave me better frequency resolution. For me, it made a big difference. When I play this simulation of a piano with my old software, called Hi-Res, I can't tell any three adjacent keys apart. But with Fidelity 120, I can. Music sounds fuller, richer, and more detailed.
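The panning analogy can be put in code. What follows is my own simplified sketch of the idea of current steering, not the real Fidelity 120 algorithm; the plain linear crossfade and the name "steer" are invented purely for illustration.

    # Rough sketch of current steering between two adjacent electrodes.
    def steer(total_current, alpha):
        # alpha = 0.0 puts the whole pulse on electrode A, alpha = 1.0
        # on electrode B; values in between aim at nerve fibers that lie
        # between the two contacts.
        return (1 - alpha) * total_current, alpha * total_current

    # Seven evenly spaced intermediate targets between one electrode pair,
    # matching the "seven virtual electrodes" mentioned above.
    for step in range(1, 8):
        on_a, on_b = steer(1.0, step / 8)
        print(f"virtual channel {step}: {on_a:.3f} on A, {on_b:.3f} on B")

Sweeping the split from one contact toward the next is what lets 16 physical electrodes behave like a much finer array of pitch channels.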

But not everyone gets the same result. Redden, who does far better musically than I do, tried Fidelity 120 but still prefers Hi-Res. Such variation from one user to the next poses a real problem for researchers who want to develop better software. The experience of music is inevitably subjective. A Sex Pistols fan might tell you that a given piece of software lets her hear "Anarchy in the U.K." better, while a Mozart fan might tell you that the software doesn't do anything for Eine Kleine Nachtmusik. Subjective reports don't give developers enough information to know whether they're making progress.

I asked Jay Rubinstein, an otolaryngologist and cochlear-implant researcher at the University of Washington, to explain the problem. "Music is not just one entity," he told me. "It consists of combinations of rhythm, melody, harmony, dissonance, and lyrics. One needs to break it down into its component parts in order to determine how well or how poorly someone can hear it."

Rubinstein and his team of researchers at the University of Iowa and the University of Washington are doing just that. At the Association for Research in Otolaryngology's annual meeting in Phoenix on February 17, they unveiled a computerized test called the Clinical Assessment of Music Perception (CAMP). A paper outlining their work has just been published in Otology and Neurotology's February issue.

CAMP sidesteps differences in taste by stripping music down to three basic components (pitch, timbre, and melody) and systematically assessing how well users perceive each.
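The paper spells out the actual battery of subtests. Purely as a sketch of what "systematically assessing" pitch could look like, here is a toy adaptive procedure in Python; the step sizes, the simulated listener, and every name in it are my own invention, not CAMP's.

    import random

    def simulated_listener(gap_semitones, threshold):
        # Stand-in for a real subject: reliably correct when the gap is
        # above their threshold, otherwise a coin flip.
        return gap_semitones >= threshold or random.random() < 0.5

    def estimate_threshold(threshold, start_gap=12.0, trials=40):
        # Simple adaptive track: shrink the gap after a correct answer,
        # widen it after a miss, then average the late-trial gaps as a
        # crude estimate of the smallest gap the listener can hear.
        gap, track = start_gap, []
        for _ in range(trials):
            gap *= 0.7 if simulated_listener(gap, threshold) else 1.5
            gap = min(max(gap, 0.1), 24.0)
            track.append(gap)
        return sum(track[-10:]) / 10

    print(f"listener with a 1.1-semitone threshold: ~{estimate_threshold(1.1):.1f} semitones")
    print(f"listener with a 7.6-semitone threshold: ~{estimate_threshold(7.6):.1f} semitones")

A procedure along these lines yields a number rather than an opinion, which is exactly what subjective reports about "Anarchy in the U.K." cannot provide.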
