Sciweavers

52 search results - page 9 / 11
» The Role of Speech in Multimodal Human-Computer Interaction
CHI
2005
ACM
Conversing with the user based on eye-gaze patterns
Motivated by and grounded in observations of eye-gaze patterns in human-human dialogue, this study explores using eye-gaze patterns in managing human-computer dialogue. We develop...
Pernilla Qvarfordt, Shumin Zhai
CMC
1998
Springer
The IntelliMedia WorkBench - An Environment for Building Multimodal Systems
Abstract. Intelligent MultiMedia (IntelliMedia) focuses on the computer processing and understanding of signal and symbol input from at least speech, text and visual images in term...
Tom Brøndsted, Paul Dalsgaard, Lars Bo Lars...
HCI
2009
Did I Get It Right: Head Gestures Analysis for Human-Machine Interactions
This paper presents a system for another input modality in a multimodal human-machine interaction scenario. In addition to other common input modalities, e.g. speech, we extract he...
Jürgen Gast, Alexander Bannat, Tobias Rehrl, ...
HAPTICS
2007
IEEE
Role of vision on haptic length perception
When a human recognizes the length of an object while exploring it with their index finger, haptic and visual sensations both provide information for estimating the length of the object...
Akinori Kumazaki, Kazunori Terada, Akira Ito
HRI
2006
ACM
Working with robots and objects: revisiting deictic reference for achieving spatial common ground
Robust joint visual attention is necessary for achieving a common frame of reference between humans and robots interacting multimodally in order to work together on real-world spat...
Andrew G. Brooks, Cynthia Breazeal