CHI 2004, ACM

Human-robot speech interface understanding inexplicit utterances using vision

Speech interfaces should be able to handle inexplicit utterances such as ellipsis and deixis, since these are common in everyday conversation. Their resolution using context and a priori knowledge has been investigated in the fields of natural language and speech understanding. However, some utterances cannot be understood by such symbol processing alone. In this paper, we consider inexplicit utterances that arise from the fact that humans have vision. If we are certain that listeners share some visual information, we often omit the things it concerns or mention them only ambiguously. We propose a method of understanding speech with such ambiguities using computer vision. It tracks the human's gaze direction and detects objects in that direction. It also recognizes the human's actions. Based on these pieces of visual information, it understands the human's inexplicit utterances. Experimental results show that the method helps to...
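The abstract describes a pipeline in which a deictic word in an utterance is grounded in the object currently lying in the speaker's gaze direction. The paper does not publish code, so the following is only a minimal sketch of that idea; all names (`DetectedObject`, `resolve_deixis`, the angle tolerance) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: resolve a deictic word ("this", "that", "it") in an
# utterance by substituting the label of the detected object closest to the
# speaker's gaze direction. Names and parameters are illustrative only.
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str    # e.g. "red cup"
    angle: float  # bearing from the camera, in degrees


DEIXIS_WORDS = {"this", "that", "it"}


def resolve_deixis(utterance: str, gaze_angle: float,
                   objects: list[DetectedObject],
                   tolerance: float = 15.0) -> str:
    """Replace a deictic word with the label of the object nearest to the
    gaze direction, if one lies within `tolerance` degrees of it."""
    candidates = [o for o in objects if abs(o.angle - gaze_angle) <= tolerance]
    if not candidates:
        return utterance  # nothing in view: leave the utterance unchanged
    target = min(candidates, key=lambda o: abs(o.angle - gaze_angle))
    words = [target.label if w.lower() in DEIXIS_WORDS else w
             for w in utterance.split()]
    return " ".join(words)


objects = [DetectedObject("red cup", 10.0), DetectedObject("book", 40.0)]
print(resolve_deixis("bring that", gaze_angle=12.0, objects=objects))
# → "bring red cup"
```

A real system would of course need continuous gaze tracking and vision-based object and action recognition in place of the fixed object list assumed here.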
Added: 01 Dec 2009
Updated: 01 Dec 2009
Type: Conference
Year: 2004
Where: CHI
Authors: Zaliyana Mohd Hanafiah, Chizu Yamazaki, Akio Nakamura, Yoshinori Kuno