Sciweavers

106 search results - page 10 / 22
» Multimodal event parsing for intelligent user interfaces
ICIA
2007
Eye Gaze for Attention Prediction in Multimodal Human-Machine Conversation
In a conversational system, determining a user’s focus of attention is crucial to the success of the system. Motivated by previous psycholinguistic findings, we are currently e...
Zahar Prasov, Joyce Yue Chai, Hogyeong Jeong
AAAI
2000
DMML: An XML Language for Interacting with Multi-Modal Dialog Systems
We present Dialog Moves Markup Language (DMML): an extensible markup language (XML) representation of modality independent communicative acts of automated conversational agents. I...
Nanda Kambhatla, Malgorzata Budzikowska, Sylvie Le...
IUI
2006
ACM
Augmenting kitchen appliances with a shared context using knowledge about daily events
Networked appliances can simplify our lives, but interacting with them can be difficult in itself. KitchenSense is an early prototype of a networked kitchen full of sensors that u...
Chia-Hsun Jackie Lee, Leonardo Bonanni, José...
UIST
1994
ACM
Extending a Graphical Toolkit for Two-handed Interaction
Multimodal interaction combines input from multiple sensors, such as pointing devices or speech recognition systems, in order to achieve more fluid and natural interaction. Two-hand...
Stéphane Chatty
CORR
2007
Springer
Ambient Multimodality: an Asset for Developing Universal Access to the Information Society
Our aim is to point out the benefits that can be derived from research advances in the implementation of concepts such as ambient intelligence and ubiquitous/pervasive computing f...
Noelle Carbonell