Motivated by and grounded in observations of eye-gaze patterns in human-human dialogue, this study explores the use of eye gaze in managing human-computer dialogue. We develop...
This paper introduces City Browser, a prototype multimodal, conversational, spoken language interface for automotive navigational aid and information access. A study designed to e...
Alexander Gruenstein, Bruce Mehler, Bryan Reimer, ...
Information visualisation benefits from the Semantic Web: multimodal mobile interfaces to the Semantic Web offer access to complex knowledge and information structures. Natural l...
In this poster, we propose the design of a multimodal robotic interaction mechanism intended for use by people with aphasia in storytelling. Through limited physical interaction,...
In this paper we describe a method that enables a robot to learn how a user gives it commands and feedback through speech, prosody and touch. We propose a biologically inspired approac...