Sciweavers

231 search results - page 12 / 47
» Recognition of Gestures in the Context of Speech
PDC
2006
ACM
A participatory design agenda for ubiquitous computing and multimodal interaction: a case study of dental practice
This paper reflects upon our attempts to bring a participatory design approach to design research into interfaces that better support dental practice. The project brought together...
Tim Cederman-Haysom, Margot Brereton
IUI
2000
ACM
Expression constraints in multimodal human-computer interaction
Thanks to recent scientific advances, it is now possible to design multimodal interfaces allowing the use of speech and pointing gestures on a touchscreen. However, present sp...
Sandrine Robbe-Reiter, Noelle Carbonell, Pierre Da...
ICMI
2005
Springer
Contextual recognition of head gestures
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. We investigate how dialog context from an ...
Louis-Philippe Morency, Candace L. Sidner, Christo...
MHCI
2009
Springer
Contextual push-to-talk: a new technique for reducing voice dialog duration
We present a technique in which physical controls have both normal and voice-enabled activation styles. In the case of the latter, knowledge of which physical control was activate...
Garrett Weinberg
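
The snippet above only outlines the idea: the physical control that was held to start speech input already tells the system what the utterance is about, so the voice dialog can be shortened. The sketch below is a hypothetical illustration of that pattern, not the paper's implementation; the control names and the `recognize(audio, phrases)` callback are assumptions made for the example.

```python
# Hypothetical sketch of contextual push-to-talk: the activated physical
# control selects a narrow phrase list, so recognition is constrained and
# the dialog needs fewer clarification turns. Control names and the
# `recognize(audio, phrases)` helper are illustrative assumptions.
CONTEXT_PHRASES = {
    "temperature_knob": ["warmer", "cooler", "seventy degrees", "auto"],
    "radio_tune_button": ["next station", "previous station", "98.7 FM"],
    "nav_button": ["go home", "nearest gas station", "cancel route"],
}

def on_push_to_talk(control_id, audio, recognize):
    """Handle a voice-enabled activation of a physical control."""
    phrases = CONTEXT_PHRASES.get(control_id)
    if phrases is None:
        # Unknown control: fall back to an unconstrained dialog.
        return recognize(audio, phrases=None)
    # Constrained recognition: the control fixes the topic, so the user
    # can skip the "what would you like to adjust?" turn entirely.
    return recognize(audio, phrases=phrases)
```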
TASLP
2002
Speech enhancement using a mixture-maximum model
We present a spectral-domain speech enhancement algorithm. The new algorithm is based on a mixture model for the short-time spectrum of the clean speech signal, and on a maximum a...
David Burshtein, Sharon Gannot
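
The abstract is cut off, but the mixture-maximum (MixMax) idea it refers to models clean-speech log-spectra with a Gaussian mixture and approximates each noisy log-spectral bin as the maximum of the speech and noise values. The sketch below is a minimal illustrative estimator under those assumptions, with a pre-trained clean-speech GMM and a single Gaussian noise model; it is not the authors' exact algorithm, and all parameter names are placeholders.

```python
# Minimal sketch of a mixture-maximum (MixMax) log-spectral estimator.
# Assumes a pre-trained Gaussian mixture over clean-speech log-spectra
# (weights w, means mu, per-bin variances var) and a single Gaussian
# noise model (mu_n, var_n). Applied per frame; the enhanced magnitude
# would be recombined with the noisy phase for resynthesis.
import numpy as np
from scipy.stats import norm

def mixmax_enhance(z, w, mu, var, mu_n, var_n):
    """Estimate the clean log-spectrum for one frame.

    z     : (D,)   observed noisy log-spectrum
    w     : (K,)   mixture weights of the clean-speech GMM
    mu    : (K, D) component means
    var   : (K, D) component variances (diagonal)
    mu_n  : (D,)   noise log-spectrum mean
    var_n : (D,)   noise log-spectrum variance
    """
    eps = 1e-300
    sig = np.sqrt(var)
    sig_n = np.sqrt(var_n)

    # Speech and noise densities / CDFs evaluated at the observation.
    f_x = norm.pdf(z, mu, sig)          # (K, D)
    F_x = norm.cdf(z, mu, sig)
    f_n = norm.pdf(z, mu_n, sig_n)      # (D,)
    F_n = norm.cdf(z, mu_n, sig_n)

    # Under z = max(x, n), per-bin likelihood of z for component k.
    p = f_x * F_n + F_x * f_n           # (K, D)

    # Posterior component responsibilities (log domain for stability).
    log_post = np.log(w) + np.sum(np.log(p + eps), axis=1)
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()

    # Probability that the speech term produced the observed maximum.
    g = f_x * F_n / (p + eps)

    # Mean of the clean bin given it lies below z (truncated Gaussian).
    e_below = mu - var * f_x / (F_x + eps)

    # Per-component MMSE estimate, averaged over the posterior.
    x_hat_k = g * z + (1.0 - g) * e_below
    return post @ x_hat_k
```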