Sciweavers

52 search results - page 2 / 11
» Integrating multiple cues for spoken language understanding
HRI 2010 (ACM)
Robust spoken instruction understanding for HRI
Natural human-robot interaction requires different and more robust models of natural language understanding (NLU) than non-embodied NLU systems. In particular, architectures are require...
Rehj Cantrell, Matthias Scheutz, Paul W. Schermerh...
ICASSP 2008 (IEEE)
Frame-based acoustic feature integration for speech understanding
To improve Spoken Language Understanding (SLU) performance, a combination of different automatic speech recognition (ASR) systems is proposed. State a-posteriori...
Loic Barrault, Christophe Servan, Driss Matrouf, G...
COLING 2000
Deixis and Conjunction in Multimodal Systems
In order to realize their full potential, multimodal interfaces need to support not just input from multiple modes, but single commands optimally distributed across the available ...
Michael Johnston
ICCV 2011 (IEEE)
Manhattan Scene Understanding Using Monocular, Stereo, and 3D Features
This paper addresses scene understanding in the context of a moving camera, integrating semantic reasoning ideas from monocular vision with 3D information available through struct...
Alex Flint, David Murray, Ian Reid
EACL 2006 (ACL Anthology)
What's There to Talk About? A Multi-Modal Model of Referring Behavior in the Presence of Shared Visual Information
This paper describes the development of a rule-based computational model of how a feature-based representation of shared visual information combines with linguistic cu...
Darren Gergle