Sciweavers

» Recognition of Gestures in the Context of Speech
INTERACT
2003
Designing and Prototyping Multimodal Commands
Abstract: Designing and implementing multimodal applications that take advantage of several recognition-based interaction techniques (e.g. speech and gesture recognition) is a diffi...
Marie-Luce Bourguet
CSL
2007
Springer
Soft indexing of speech content for search in spoken documents
The paper presents the Position Specific Posterior Lattice (PSPL), a novel lossy representation of automatic speech recognition lattices that naturally lends itself to efficient ...
Ciprian Chelba, Jorge Silva, Alex Acero
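The core idea behind a PSPL-style index is to collapse the many alternative word sequences an ASR system hypothesizes into per-position word posteriors: for each sentence position, each word gets the total probability mass of the hypotheses that place it there. The sketch below illustrates this aggregation over an n-best list, a simplification of the full lattice computation described in the paper; the function name and toy data are illustrative, not from the source.

```python
from collections import defaultdict

def pspl_from_nbest(paths):
    """Build a toy PSPL-style index from an n-best list of
    (word_sequence, score) pairs: for each position, sum the
    normalized posterior mass of every word hypothesized there.
    (A simplification -- the real PSPL aggregates over a full
    ASR lattice via forward-backward, not an n-best list.)"""
    total = sum(score for _, score in paths)
    pspl = defaultdict(lambda: defaultdict(float))
    for words, score in paths:
        for pos, word in enumerate(words):
            pspl[pos][word] += score / total
    return {pos: dict(words) for pos, words in pspl.items()}

# Illustrative n-best list with made-up scores:
nbest = [
    (["recognize", "speech"], 0.6),
    (["wreck", "a", "nice", "beach"], 0.3),
    (["recognize", "beach"], 0.1),
]
index = pspl_from_nbest(nbest)
# index[0]["recognize"] == 0.7  (mass from two competing hypotheses)
```

Because the index is keyed by position, a search engine can score multi-word queries by looking up adjacent-position posteriors instead of re-walking the lattice, which is what makes the representation attractive for spoken-document retrieval.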
ICMI
2004
Springer
A multimodal learning interface for sketch, speak and point creation of a schedule chart
We present a video demonstration of an agent-based test bed application for ongoing research into multi-user, multimodal, computer-assisted meetings. The system tracks a two perso...
Edward C. Kaiser, David Demirdjian, Alexander Grue...
ISER
2004
Springer
Interactive Multi-Modal Robot Programming
As robots enter the human environment and come in contact with inexperienced users, they need to be able to interact with users in a multi-modal fashion—keyboard and mouse are n...
Soshi Iba, Christiaan J. J. Paredis, Pradeep K. Kh...
IUI
2009
ACM
Positive effects of redundant descriptions in an interactive semantic speech interface
Spoken language interfaces based on interactive semantic language models [16, 14] allow probabilities for hypothesized words to be conditioned on the semantic interpretation of th...
Lane Schwartz, Luan Nguyen, Andrew Exley, William ...
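Conditioning word probabilities on a semantic interpretation can be pictured as rescoring: the recognizer's word scores are reweighted by how compatible each word is with the current semantic frame, then renormalized. The sketch below is a generic Bayesian-rescoring illustration of that idea, not the authors' actual model; the function and the compatibility table are assumptions for demonstration.

```python
def semantic_rescore(word_scores, semantic_fit):
    """Reweight recognizer word scores by a semantic-compatibility
    factor and renormalize. A generic rescoring sketch, not the
    interactive semantic language model from the paper."""
    raw = {w: s * semantic_fit.get(w, 1e-6)
           for w, s in word_scores.items()}
    z = sum(raw.values())
    return {w: v / z for w, v in raw.items()}

# Two acoustically confusable hypotheses; the semantic context
# (e.g. a calendar task) strongly favors "cancel":
scores = semantic_rescore(
    {"cancel": 0.5, "camel": 0.5},
    {"cancel": 0.9, "camel": 0.1},
)
# scores["cancel"] == 0.9
```

The abstract's point about redundant descriptions fits this picture: each redundant mention supplies extra semantic evidence, further sharpening the reweighted distribution.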