ICASSP 2010, IEEE
Synthesizing speech from Doppler signals

It has long been considered a desirable goal to be able to construct an intelligible speech signal merely by observing the talker in the act of speaking. Past approaches to this problem have been based on camera-based observations of the talker's face, combined with statistical methods that infer the speech signal from the facial motion captured by the camera. Other methods have synthesized speech from measurements taken by electromyographs and other devices that are tethered to the talker – an undesirable setup. In this paper we present a new device for synthesizing speech from characterizations of facial motion associated with speech – a Doppler sonar. Facial movement is characterized through Doppler frequency shifts in a tone that is incident on the talker's face. These frequency shifts are used to infer the underlying speech signal. The setup is far-field and untethered, with the sonar acting from the distance of a regular desktop microphone. Preliminary expe...
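The abstract's core signal-processing idea – recovering Doppler frequency shifts from a tone reflected off the moving face – can be sketched as complex demodulation to baseband followed by instantaneous-frequency estimation. This is a minimal illustration of that general technique, not the authors' actual pipeline; the carrier frequency, sampling rate, and the use of an analytic (complex) received signal are assumptions for the example (a real receiver would apply a Hilbert transform or low-pass filter after mixing to suppress the image component).

```python
import numpy as np

def doppler_shift_track(received, carrier_hz, fs):
    """Estimate the instantaneous Doppler shift (Hz) of a received sonar
    tone by mixing it down to baseband and differentiating the phase.
    `received` is assumed to be an analytic (complex) signal."""
    t = np.arange(len(received)) / fs
    # Mix with the conjugate carrier: the tone moves to DC plus its Doppler shift.
    baseband = received * np.exp(-2j * np.pi * carrier_hz * t)
    # Instantaneous frequency = derivative of the unwrapped baseband phase.
    phase = np.unwrap(np.angle(baseband))
    return np.diff(phase) * fs / (2 * np.pi)

# Simulated example: a 40 kHz tone returning with a +50 Hz Doppler shift,
# as might be produced by facial motion toward the sensor.
fs = 200_000          # assumed sampling rate
carrier = 40_000.0    # assumed ultrasonic carrier
true_shift = 50.0
t = np.arange(int(0.05 * fs)) / fs
received = np.exp(2j * np.pi * (carrier + true_shift) * t)

shift_track = doppler_shift_track(received, carrier, fs)
estimated_shift = float(np.median(shift_track))
```

In a full system, a frame-by-frame version of such a shift track (or richer spectral features of the demodulated signal) would serve as the input features from which a statistical model infers the speech signal.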
Added: 06 Dec 2010
Updated: 06 Dec 2010
Type: Conference
Year: 2010
Where: ICASSP
Authors: Arthur R. Toth, Kaustubh Kalgaonkar, Bhiksha Raj, Tony Ezzat