Annoyed by cell phone users who believe they need to yell into their mobiles in order to be heard? Such loud and obnoxious talkers may become extinct, thanks in part to the military's need for reliable -- and sometimes silent -- communications.
For years, scientists at the U.S. military's Defense Advanced Research Projects Agency have been investigating ways to improve radio communications among troops in noisy environments, such as inside a rumbling tank or a clattering helicopter.
Under its Advanced Speech Encoding project, DARPA hopes the answer may lie in refinement of so-called "non-acoustic sensors," experimental devices that can pick up a person's voice without a single syllable shouted, spoken or otherwise uttered.
One such device, the Tuned Electromagnetic Resonance Collar, or TERC, being developed by a team of researchers at Worcester Polytechnic Institute in Massachusetts, uses a unique approach to creating speech from an unspoken voice.
A Quest for Quiet Talk
TERC is a plastic strip embedded with thin copper foil and other small electronic components. When strapped around a person's neck, the collar acts as a big capacitor -- an electronic component that can be charged to hold a small amount of electricity.
As a person speaks, the tiny movement of the vocal cords changes the collar's capacitance. Microchips measure and process these shifting electrical signals, and computers equipped with specially crafted software turn them into synthesized human speech.
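The core idea -- watching for rapid changes in capacitance as a sign that the wearer's vocal cords are moving -- can be sketched in a few lines of code. This is purely an illustration of the concept described above; the function name, the differencing approach and the threshold are all hypothetical, not WPI's actual software.

```python
# Illustrative sketch: flagging "glottal activity" in a stream of
# capacitance readings. A steady reading means silence; rapid changes
# suggest vocal-cord movement. Threshold and units are made up.

def glottal_activity(readings, threshold=0.5):
    """Return True for each sample-to-sample step whose capacitance
    change is large enough to suggest the wearer is speaking."""
    deltas = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return [d > threshold for d in deltas]

quiet = [10.0, 10.0, 10.1, 10.0]      # flat readings: collar at rest
speaking = [10.0, 11.5, 9.0, 11.2]    # jittery readings: cords moving
print(glottal_activity(quiet))        # all False
print(glottal_activity(speaking))     # all True
```

In the real device, of course, detecting *that* someone is speaking is only the first step; turning those signals into recognizable speech is the hard part, as the researchers note below.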
The main advantage of non-acoustic sensors such as TERC is that they pick up only the voice of the speaker wearing the device. Conventional microphones, by contrast, pick up the speaker's voice as well as every other sound within their range.
It's a novel approach to battling background noise, says Donald R. Brown, a principal investigator with the research team developing the sensor at WPI. But capacitance sensing on the human body is a well-understood technology already employed in other devices.
"If you own an iPod, the front switches work on capacitance technology," says Brown. "Drag your finger across it and it changes the capacitance and [the iPod] performs an action." Capacitance technology is also used in some laptop "touchpads" to control the on-screen cursor.
Sticky Points for a Collar
But WPI researchers admit that there are limitations to TERC that still need to be ironed out.
For one, the capacitance technology doesn't yet work across the full range of human speech. Researchers note that TERC can't pick up fricatives (the "s" sound in "English," for example) or plosives -- the "p" sound in "pit."
Another harsh reality: While TERC can measure "glottal activity" for signs of a person speaking, it still takes a lot of bulky computers to translate those digital signals into recognizable speech. And what a receiver actually "hears" from a TERC-based transmission will be a computer-generated voice, since the non-acoustic sensors don't capture the actual sound of the person speaking.
WPI's Brown says that while the initial research work on TERC is promising, DARPA funding for further study has lapsed. And the initial team of professors and graduate students researching the project has moved on to other areas. But he's hoping that both can be corrected this year.
One thing that could work in WPI's favor is Brown's belief that the TERC technology can be used for other purposes besides clear communications.
"We could look at how non-acoustic sensors such as TERC could fit in clinical applications -- say, detecting problems with vocal cords," says Brown. TERC might even be able to tell if someone has had a little too much to drink.
"When you're intoxicated, your vocal cords become heavier. So maybe we can detect intoxication strictly by cord movement," says Brown. But, "we don't have any studies in those areas yet."
Steps Toward Silent Speakers
Whether Brown can spur further funding and research for TERC remains to be seen. But the promise of better, less stressful mobile communications isn't just a pipe dream.
According to Jan Walker, a public information officer for DARPA, the defense agency is in the process of outlining the second phase of its Advanced Speech Encoding project. Some of the more promising prospects include technology developed at NASA's Ames Research Center that uses electrodes that may be able to detect sub-vocal or perhaps even completely silent speech.
And even DARPA's early research efforts are already proving useful.
AliphCom, a technology company in Brisbane, Calif., recently began selling a high-tech, noise-canceling headset for consumer cell phones based on technology developed years ago for the initial phase of DARPA's project.
Its $150 Jawbone unit uses both a conventional microphone and a sensor that picks up vibrations from a person's jaw -- hence the name. Tiny digital signal processors in the device compare the electrical signals generated by the microphone and the vibration sensor to quickly distinguish speech from background noise. The chips then generate a signal that is the exact inverse of the noise, effectively canceling it out of the spoken-voice portion.
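The anti-noise trick described above -- inverting an estimate of the noise and adding it back so the two cancel -- can be shown with a toy example. Everything here is hypothetical and drastically simplified: a real headset's DSP estimates the noise adaptively in real time rather than being handed it directly.

```python
# Illustrative sketch of active noise cancellation: add the inverted
# noise estimate to the microphone signal so the noise cancels out,
# ideally leaving only the speech.

def cancel_noise(mic_samples, noise_estimate):
    """Subtract the estimated noise (i.e., add its inverse) from
    what the microphone picked up."""
    return [m + (-n) for m, n in zip(mic_samples, noise_estimate)]

speech = [0.2, -0.1, 0.4]
noise = [0.5, 0.5, -0.3]
mic = [s + n for s, n in zip(speech, noise)]  # what the microphone hears

cleaned = cancel_noise(mic, noise)
print(cleaned)  # recovers the speech values, up to rounding
```

The hard engineering problem, which the jaw-vibration sensor helps solve, is producing a good noise estimate in the first place: the vibration sensor tells the chips which parts of the microphone signal are actually the wearer's voice.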
Company co-founder Hosain Rahman said Jawbone has been "incredibly well-received in the consumer press" since it became commercially available last year. What's more, "It is being used for further DARPA development," said Rahman. (DARPA's Walker confirms that AliphCom is one of the participants in the second phase of its research project but did not elaborate further.)
Still, the Jawbone technology isn't currently available for every model of consumer cell phone. And since Jawbone is an "active noise cancellation" headset, the technology does sap some power from the cell phone. The company estimates users can expect a 15 percent to 25 percent reduction in a cell phone's battery life, but the headset has a switch to turn off the noise-canceling feature when it's not needed.
Such limitations may mean many of us will still have to endure the clamor of vociferous cell phone users for some time.