Sciweavers

Usability of user interfaces: from monomodal to multimodal
ICMI
2010
Springer
Focusing computational visual attention in multi-modal human-robot interaction
Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction. Most importantly, it is essential to achieve a joint focus of attentio...
Boris Schauerte, Gernot A. Fink
ICMI
2004
Springer
Towards integrated microplanning of language and iconic gesture for multimodal output
When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, ...
Stefan Kopp, Paul Tepper, Justine Cassell
CSCW
2004
ACM
Instant messages: a framework for reading between the lines
A framework is described for analyzing keystroke-level data from instant messages (IM). This differs from other analyses of IM, which rely on server-based message logs. This framew...
Jeffrey D. Campbell
GW
1999
Springer
The Ecological Approach to Multimodal System Design
Following the ecological approach to visual perception, this paper presents a framework that emphasizes the role of vision in referring actions. In particular, affordances are util...
Antonella De Angeli, Frederic Wolff, Laurent Romar...
HCI
2009
Guiding a Driver's Visual Attention Using Graphical and Auditory Animations
This contribution presents our work towards a system that autonomously guides the user's visual attention to important information (e.g., traffic situation or in-car system st...
Tony Poitschke, Florian Laquai, Gerhard Rigoll