Sciweavers

30 search results - page 3 / 6
» Disambiguating speech commands using physical context
CHI 2003 (ACM)
Hands on cooking: towards an attentive kitchen
To make human-computer interaction more transparent, different modes of communication need to be explored. We present eyeCOOK, a multimodal attentive cookbook to help a non-expert...
Jeremy S. Bradbury, Jeffrey S. Shell, Craig B. Kno...
ICCV 2005 (IEEE)
Visual Speech Recognition with Loosely Synchronized Feature Streams
We present an approach to detecting and recognizing spoken isolated phrases based solely on visual input. We adopt an architecture that first employs discriminative detection of ...
Kate Saenko, Karen Livescu, Michael Siracusa, Kevi...
ECAI 2008 (Springer)
Salience-driven Contextual Priming of Speech Recognition for Human-Robot Interaction
The paper presents an implemented model for priming speech recognition, using contextual information about salient entities. The underlying hypothesis is that, in human-r...
Pierre Lison, Geert-Jan M. Kruijff
ISMAR 2006 (IEEE)
"Move the couch where?" : developing an augmented reality multimodal interface
This paper describes an augmented reality (AR) multimodal interface that uses speech and paddle gestures for interaction. The application allows users to intuitively arrange virtu...
Sylvia Irawati, Scott Green, Mark Billinghurst, An...
NORDICHI 2004 (ACM)
Adaptivity in speech-based multilingual e-mail client
In speech interfaces users must be aware of what can be done with the system – in other words, the system must provide information to help users know what to say. We have ad...
Esa-Pekka Salonen, Mikko Hartikainen, Markku Turun...