Sciweavers

16 search results for "Multimodal inference for driver-vehicle interaction" (page 2 of 4)
HRI
2010
ACM
Investigating multimodal real-time patterns of joint attention in an HRI word learning task
Abstract—Joint attention – the idea that humans make inferences from observable behaviors of other humans by attending to the objects and events that these other humans attend...
Chen Yu, Matthias Scheutz, Paul W. Schermerhorn
IUI
2003
ACM
Affective multi-modal interfaces: the case of McGurk effect
This study is motivated by the increased need to understand human response to video links, 3G telephony, and avatars. We focus on the response of participants to audiovisual presentati...
Azra N. Ali, Philip H. Marsden
CVPR
2000
IEEE
Multimodal Speaker Detection Using Error Feedback Dynamic Bayesian Networks
The design and development of novel human-computer interfaces poses a challenging problem: the actions and intentions of users have to be inferred from sequences of noisy and ambiguous mu...
Vladimir Pavlovic, James M. Rehg, Ashutosh Garg, T...
ICMI
2005
Springer
Inferring body pose using speech content
Untethered multimodal interfaces are more attractive than tethered ones because they are more natural and expressive for interaction. Such interfaces usually require robust vision...
Sy Bor Wang, David Demirdjian
ACE
2007
ACM
Gaze-based infotainment agents
We propose an infotainment presentation system that relies on eye gaze as an intuitive and unobtrusive input modality. The system analyzes eye movements in real time to infer user...
Helmut Prendinger, Tobias Eichner, Elisabeth André...