Optimizing Visual Feature Perception for an Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations

Given the difficulty many individuals have in hearing and understanding speech, we plan to supplement the sound of speech and speechreading with an additional informative visual input. Acoustic characteristics of the speech will be transformed into readily perceivable visual characteristics. The goal is to design a device, seamlessly worn by the listener, that performs continuous real-time acoustic analysis of his or her interlocutor’s speech. The device would transform several continuous acoustic features of the talker’s speech into continuous visual features, displayed simultaneously on the speechreader’s eyeglasses. The current research evaluates how easily a number of different visual configurations are learned and perceived, with the goal of optimizing the visual feature presentation and implementing it in the wearable computer system.
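The abstract does not say which acoustic features or which visual parameterization the authors chose. As a rough illustration of the pipeline it describes, the sketch below (plain NumPy; the sample rate, frame size, and the energy-to-brightness / pitch-to-hue mapping are all assumptions, not the paper's method) extracts two continuous acoustic features per frame and maps them onto 0-1 display parameters.

```python
# Minimal sketch of acoustic-to-visual feature mapping (illustrative only,
# not the authors' implementation). Assumed: 16 kHz audio, ~32 ms frames,
# RMS energy -> brightness, autocorrelation pitch -> hue.
import numpy as np

SR = 16_000   # sample rate (Hz), assumed
FRAME = 512   # analysis frame length (~32 ms), assumed

def frame_features(frame: np.ndarray) -> tuple[float, float]:
    """Return (rms_energy, f0_hz) for one audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    # Autocorrelation pitch estimate, restricted to 80-400 Hz.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = SR // 400, SR // 80
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = SR / lag if ac[lag] > 0 else 0.0  # 0.0 signals "unvoiced"
    return rms, f0

def to_visual(rms: float, f0: float) -> dict:
    """Map acoustic features to continuous display parameters in [0, 1]."""
    brightness = min(1.0, rms * 20.0)                    # louder -> brighter
    hue = 0.0 if f0 == 0 else min(1.0, (f0 - 80) / 320)  # higher pitch -> higher hue
    return {"brightness": brightness, "hue": hue}

# Example: one frame of a synthetic 200 Hz tone.
t = np.arange(FRAME) / SR
frame = 0.1 * np.sin(2 * np.pi * 200 * t)
print(to_visual(*frame_features(frame)))
```

In a real-time version of this design, frames would stream from a microphone and the two parameters would continuously drive elements rendered on the eyeglass display.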
Type Conference
Year 2009
Where HICSS (IEEE)
Authors Dominic W. Massaro, Miguel Á. Carreira-Perpiñán, David J. Merrill