Helping the Deaf Hear Music
New test measures music perception in cochlear-implant users.
Feb. 26, 2008 — John Redden is a deaf professional musician. He can sing on key, harmonize on key, and hear musical intervals well enough to reproduce them. He does this with a cochlear implant, a computer chip surgically embedded in his skull. The chip drives 16 tiny electrodes threaded into his inner ear, which stimulate his auditory nerves. It gets its audio data from an external processor that sits on his ear and looks like a hearing aid. Instead of amplifying sound, though, the processor digitizes it and sends it to the implant by radio through the skin.
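For readers who like to tinker, here is a rough sketch in Python of the general idea, not the actual software running in anyone's implant: digitize a frame of audio, split it into 16 frequency bands, and boil each band down to a single stimulation level, one per electrode. The sample rate, band edges, and function names here are my own illustrative choices.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz; an assumed rate, typical for speech processing
N_ELECTRODES = 16      # the implant described in this article has 16

def electrode_levels(audio_frame: np.ndarray) -> np.ndarray:
    """Map one frame of digitized audio to 16 stimulation levels.

    Illustrative simplification: take a spectrum of the frame and sum
    the energy in 16 logarithmically spaced bands spanning roughly the
    speech range (250 Hz to 8 kHz).
    """
    spectrum = np.abs(np.fft.rfft(audio_frame)) ** 2
    freqs = np.fft.rfftfreq(len(audio_frame), d=1.0 / SAMPLE_RATE)
    edges = np.geomspace(250.0, 8000.0, N_ELECTRODES + 1)
    levels = np.empty(N_ELECTRODES)
    for i in range(N_ELECTRODES):
        in_band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        levels[i] = spectrum[in_band].sum()
    return levels

# Example: a 50-millisecond frame of a 440 Hz tone lights up one band.
t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
print(electrode_levels(np.sin(2 * np.pi * 440 * t)).round(1))
```

Running it prints 16 numbers, with nearly all the energy in the one band containing 440 Hz. That coarse pattern of levels is roughly what the implant conveys.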
The technology is a marvel, but people like Redden are a mystery. The software is designed for speech, so it "listens" only to the speech frequencies rather than the much wider range occupied by music. The device delivers the overall shape of a sound rather than the detailed frequency information that is crucial to distinguishing one pitch from another.
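A quick way to see the problem, using the same illustrative 16-band split as the sketch above: two pitches a semitone apart can land in the same band, so the implant delivers an identical pattern for both.

```python
import numpy as np

edges = np.geomspace(250.0, 8000.0, 17)   # 16 illustrative log-spaced bands
a4 = 440.0                                # concert A
a_sharp4 = a4 * 2 ** (1 / 12)             # one semitone higher, ~466 Hz
# Both tones land in the same band, so the electrode pattern is identical.
print(np.searchsorted(edges, a4), np.searchsorted(edges, a_sharp4))  # 3 3
```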
Most people with normal hearing can tell the difference between pitches that are 1.1 semitones apart. (A semitone is the smallest pitch interval in Western music.) But a 2002 study at the University of Iowa found that most implant users can distinguish pitches only when they are at least 7.6 semitones apart.
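In hertz, those thresholds work out as follows. A semitone multiplies a frequency by the twelfth root of two, so this little calculation (my own, just arithmetic on the numbers above) shows what each threshold means starting from concert A at 440 Hz:

```python
# A semitone multiplies a frequency by 2**(1/12), so n semitones above
# concert A (440 Hz) is 440 * 2**(n / 12).
def semitones_above(base_hz: float, n: float) -> float:
    return base_hz * 2 ** (n / 12)

print(round(semitones_above(440, 1.1)))  # ~469 Hz: normal-hearing threshold
print(round(semitones_above(440, 7.6)))  # ~683 Hz: implant-user threshold (Iowa study)
```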
Some progress has been made in writing better software for music. I'm a cochlear-implant user myself. In 2005, I tried new software, called Fidelity 120, that simulated seven virtual electrodes between each pair of physical electrodes, not unlike the way an audio engineer can make a sound seem to come from between two speakers. By targeting the nerve populations between each pair of electrodes, the software gave me better frequency resolution. For me, it made a big difference. When I play a simulated piano with my old software, called Hi-Res, I can't tell any three adjacent keys apart. But with Fidelity 120, I can. Music sounds fuller, richer, and more detailed.
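For the technically curious, here is a minimal sketch of the current-steering idea behind Fidelity 120, under my own simplifying assumptions; the real stimulation strategy is more involved. The principle is the panning trick from the paragraph above: split the current between two adjacent electrodes to shift the perceived pitch between them.

```python
def steer_current(position: float, total_current: float) -> dict:
    """Split stimulation current between two adjacent electrodes.

    position: 0.0 puts all current on the lower electrode,
              1.0 puts it all on the upper one; values in between
              shift the perceived pitch between the two.
    """
    return {
        "lower_electrode": (1.0 - position) * total_current,
        "upper_electrode": position * total_current,
    }

# Seven intermediate steps between one pair of physical electrodes,
# matching the seven virtual electrodes described above.
for step in range(1, 8):
    print(step, steer_current(step / 8, total_current=1.0))
```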
But not everyone gets the same result. Redden, who does far better musically than I do, tried Fidelity 120 but still prefers Hi-Res. Such variation from one user to the next poses a real problem for researchers who want to develop better software. The experience of music is inevitably subjective. A Sex Pistols fan might tell you that a given piece of software lets her hear "Anarchy in the U.K." better, while a Mozart fan might tell you that the same software does nothing for Eine kleine Nachtmusik. Subjective reports don't give developers enough information to know whether they're making progress.