Sciweavers

262 search results - page 28 / 53
The Integrality of Speech in Multimodal Interfaces
GW 2005 (Springer)
Deixis: How to Determine Demonstrated Objects Using a Pointing Cone
Abstract. We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in t...
Alfred Kranstedt, Andy Lücking, Thies Pfeiffe...
CHI 2000 (ACM)
Providing integrated toolkit-level support for ambiguity in recognition-based interfaces
Interfaces based on recognition technologies are used extensively in both the commercial and research worlds. But recognizers are still error-prone, and this results in human perf...
Jennifer Mankoff, Scott E. Hudson, Gregory D. Abow...
HCI 2007
Multimodal Augmented Reality in Medicine
The driving force of our current research is the development of medical training systems using augmented reality techniques. To provide multimodal feedback for the simulation, hapt...
Matthias Harders, Gérald Bianchi, Benjamin ...
NAACL 1994
Advanced Human-Computer Interface and Voice Processing Applications in Space
Much interest already exists in the electronics research community in developing and integrating speech technology into a variety of applications, ranging from voice-activated syst...
Julie Payette
COLING 2010
Latent Mixture of Discriminative Experts for Multimodal Prediction Modeling
During face-to-face conversation, people naturally integrate speech, gestures, and higher-level language interpretations to predict the right time to start talking or to give backc...
Derya Ozkan, Kenji Sagae, Louis-Philippe Morency