Sciweavers

262 search results - page 18 / 53
» The Integrality of Speech in Multimodal Interfaces
PRL 2002
A hierarchical tag-graph search scheme with layered grammar rules for spontaneous speech understanding
It has always been difficult for language understanding systems to handle spontaneous speech with satisfactory robustness, primarily due to such problems as the fragments, disflue...
Bor-shen Lin, Berlin Chen, Hsin-Min Wang, Lin-Shan...
AVI 2008
Scenique: a multimodal image retrieval interface
Searching for images by using low-level visual features, such as color and texture, is known to be a powerful, yet imprecise, retrieval paradigm. The same is true if search relies...
Ilaria Bartolini, Paolo Ciaccia
ICMI 2005, Springer
A user interface framework for multimodal VR interactions
This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments. Its configurable input and gesture processing compon...
Marc Erich Latoschik
CLEAR 2006, Springer
Audio, Video and Multimodal Person Identification in a Smart Room
In this paper, we address the modality integration issue using the example of a smart room environment, aiming to enable person identification by combining acoustic features and 2D f...
Jordi Luque, Ramon Morros, Ainara Garde, Jan Angui...
IVA 2010, Springer
Realizing Multimodal Behavior - Closing the Gap between Behavior Planning and Embodied Agent Presentation
Abstract. Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, ...) is challenging. It requires a high degree of animation control...
Michael Kipp, Alexis Heloir, Marc Schröder, P...