Sciweavers

262 search results - page 14 / 53
» The Integrality of Speech in Multimodal Interfaces
HCI
2009
Using 3D Touch Interaction for a Multimodal Zoomable User Interface
Touchscreens are becoming the preferred input device in a growing number of applications. They are interesting devices that are increasingly being introduced into the automotive domain...
Florian Laquai, Markus Ablaßmeier, Tony Poit...
SAC
2004
ACM
Knowledge-based conversational agents and virtual storytelling
We describe an architecture for building speech-enabled conversational agents, deployed as self-contained Web services, with the ability to provide inference processing on ve...
Paul Tarau, Elizabeth Figa
ICMI
2004
Springer
GroupMedia: distributed multi-modal interfaces
In this paper, we describe the GroupMedia system, which uses wireless wearable computers to measure audio features, head movement, and galvanic skin response (GSR) for dyads and gr...
Anmol Madan, Ron Caneel, Alex Pentland
CHI
2006
ACM
Speech pen: predictive handwriting based on ambient multimodal recognition
It is tedious to write long passages of text by hand. To make this process more efficient, we propose predictive handwriting that provides input predictions when the user writ...
Kazutaka Kurihara, Masataka Goto, Jun Ogata, Takeo...
IUI
2003
ACM
Intelligent dialog overcomes speech technology limitations: the SENECa example
We present a primarily speech-based user interface to a wide range of entertainment, navigation and communication applications for use in vehicles. The multimodal dialog enables t...
Wolfgang Minker, Udo Haiber, Paul Heisterkamp, Sve...