Sciweavers

106 search results - page 13 / 22
» Multimodal event parsing for intelligent user interfaces
VIP
2003
Face and Body Gesture Recognition for a Vision-Based Multimodal Analyzer
For computers to interact intelligently with human users, they should be able to recognize emotions by analyzing the human's affective state, physiology, and behavior. I...
Hatice Gunes, Massimo Piccardi, Tony Jan
WSC
1997
SimTutor: A Multimedia Intelligent Tutoring System for Simulation Modeling
SimTutor is a multimedia intelligent tutoring system (ITS) for simulation modeling. Multimedia systems are now the de facto standard on personal computers, and an increasing number of int...
Tajudeen A. Atolagbe, Vlatka Hlupic
ICMI
2010
Springer
Focusing computational visual attention in multi-modal human-robot interaction
Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction. Above all, it is essential to achieve a joint focus of attentio...
Boris Schauerte, Gernot A. Fink
ICMI
2004
Springer
Towards integrated microplanning of language and iconic gesture for multimodal output
When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, ...
Stefan Kopp, Paul Tepper, Justine Cassell
ICIA
2007
An Adaptive, Emotional, and Expressive Reminding System
We are currently developing an adaptive, emotional, and expressive interface agent, which learns when and how to notify users about self-assigned tasks and events. In this paper, ...
Nadine Richard, Seiji Yamada