This paper describes an e-learning interface with multiple tutoring character agents. The character agents use eye movement information to facilitate empathy-relevant reasoning and...
Hua Wang, Jie Yang, Mark H. Chignell, Mitsuru Ishi...
A novel interface system for accessing geospatial data (GeoMIP) has been developed that realizes a user-centered multimodal speech/gesture interface for addressing some of the cri...
This paper proposes a framework in which end-users can instantaneously modify existing Web applications by introducing a multimodal user interface. The authors use the IntelligentPa...
Interaction designers are increasingly faced with the challenge of creating interfaces that incorporate multiple input modalities, such as pen and speech, and span multiple device...
Multimodal interaction combines input from multiple sensors such as pointing devices or speech recognition systems, in order to achieve more fluid and natural interaction. Two-hand...