Sciweavers

231 search results - page 22 / 47
Search: Recognition of Gestures in the Context of Speech
HCI
2009
Did I Get It Right: Head Gestures Analysis for Human-Machine Interactions
This paper presents a system that adds a further input modality to a multimodal human-machine interaction scenario. In addition to other common input modalities, e.g. speech, we extract he...
Jürgen Gast, Alexander Bannat, Tobias Rehrl, ...
ICASSP
2008
IEEE
Multimodal information fusion using the iterative decoding algorithm and its application to audio-visual speech recognition
The fusion of information from heterogeneous sensors is crucial to the effectiveness of a multimodal system. Noise affects the sensors of different modalities independently. A good ...
Shankar T. Shivappa, Bhaskar D. Rao, Mohan M. Triv...
ICMCS
2005
IEEE
Segment-based approach to the recognition of emotions in speech
A new framework for the context and speaker independent recognition of emotions from voice, based on a richer and more natural representation of the speech signal, is proposed. Th...
Mohammad T. Shami, Mohamed S. Kamel
ICASSP
2011
IEEE
Amplitude modulation spectrogram based features for robust speech recognition in noisy and reverberant environments
In this contribution we present a feature extraction method that relies on the modulation-spectral analysis of amplitude fluctuations within sub-bands of the acoustic spectrum by ...
Niko Moritz, Jörn Anemüller, Birger Koll...
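The idea named in this abstract — analysing amplitude fluctuations within sub-bands of the acoustic spectrum — amounts to a two-stage spectral analysis: a first short-time FFT yields sub-band amplitude envelopes, and a second FFT across time within each sub-band yields the modulation spectrum. The sketch below is illustrative only, not the authors' implementation; the frame sizes, Hann windows, and plain magnitude envelopes are assumptions chosen for brevity.

```python
import numpy as np

def ams_features(signal, frame_len=400, hop=160, mod_frame=32):
    """Illustrative amplitude modulation spectrogram (AMS) sketch.

    Stage 1: short-time magnitude spectra -> sub-band envelopes over time.
    Stage 2: FFT over blocks of consecutive frames within each sub-band
             -> modulation spectrum per acoustic band.
    """
    # Stage 1: frame the signal, window it, take magnitude spectra.
    n_frames = 1 + (len(signal) - frame_len) // hop
    win = np.hanning(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * win
                       for i in range(n_frames)])
    envelopes = np.abs(np.fft.rfft(frames, axis=1))  # (time, acoustic_band)

    # Stage 2: FFT across the time axis within blocks of `mod_frame` frames.
    n_blocks = envelopes.shape[0] // mod_frame
    blocks = envelopes[:n_blocks * mod_frame].reshape(n_blocks, mod_frame, -1)
    mod_win = np.hanning(mod_frame)[None, :, None]
    ams = np.abs(np.fft.rfft(blocks * mod_win, axis=1))
    return ams  # (block, modulation_frequency, acoustic_band)
```

With a one-second signal at 16 kHz and these defaults, the result is a (blocks × modulation-frequency × acoustic-band) array of non-negative modulation magnitudes.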
CVPR
1998
IEEE
Action Recognition Using Probabilistic Parsing
A new approach to the recognition of temporal behaviors and activities is presented. The fundamental idea, inspired by work in speech recognition, is to divide the inference probl...
Aaron F. Bobick, Yuri A. Ivanov
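The divide-and-conquer idea in this last abstract — low-level detectors emitting probabilistic streams of primitive events, with a higher-level parser assembling them into activities — can be illustrated with a toy sketch. The event names, probabilities, and the simple left-to-right Viterbi-style alignment below are invented for illustration; the paper's actual approach uses richer probabilistic parsing over detector outputs.

```python
import math

def score_activity(pattern, frame_probs):
    """Score how well per-frame detector probabilities match an activity
    defined as an ordered sequence of primitive events (`pattern`).

    Left-to-right dynamic programming: each pattern symbol must cover one
    or more consecutive frames, in order. Returns a log-probability.
    """
    T, N = len(frame_probs), len(pattern)
    NEG = float('-inf')
    # dp[i][t]: best log-prob of matching pattern[:i+1] against frames[:t+1]
    dp = [[NEG] * T for _ in range(N)]
    for i in range(N):
        for t in range(T):
            lp = math.log(frame_probs[t].get(pattern[i], 1e-12))
            if i == 0 and t == 0:
                dp[i][t] = lp
            else:
                stay = dp[i][t - 1] if t > 0 else NEG        # symbol i continues
                advance = dp[i - 1][t - 1] if (i > 0 and t > 0) else NEG
                best = max(stay, advance)
                if best > NEG:
                    dp[i][t] = lp + best
    return dp[N - 1][T - 1]
```

Given detector outputs that clearly fire "reach", then "grasp", then "lift", the ordered pattern scores higher than any permutation of it, which is the parsing step doing the temporal inference the detectors alone cannot.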