ICASSP 2009, IEEE

Leveraging multiple query logs to improve language models for spoken query recognition

A voice search system requires a speech interface that can correctly recognize spoken queries uttered by users. Recognition performance relies strongly on a robust language model. In this work, we present the use of multiple data sources, with a focus on query logs, to improve ASR language models for a voice search application. Our contributions are threefold: (1) the use of text queries from web search and mobile search in language modeling; (2) the use of web click data to predict query forms from business listing forms; and (3) the use of voice query logs to create a positive feedback loop. Experiments show that by leveraging these resources, we can achieve recognition performance comparable to, or even better than, that of a previously deployed system where a large amount of spoken query transcripts was used in language modeling.
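The core idea of combining multiple query-log sources into one language model is commonly realized by linear interpolation of per-source models. As a minimal sketch (not the paper's actual implementation; the corpora, weights, and unigram simplification are illustrative assumptions), each source gets its own maximum-likelihood model, and the models are mixed with source-specific weights:

```python
from collections import Counter

def train_unigram(corpus):
    """MLE unigram model from a list of query strings."""
    counts = Counter(w for query in corpus for w in query.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(models, weights):
    """Linear interpolation: p(w) = sum_i lambda_i * p_i(w)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = set().union(*(m.keys() for m in models))
    return {w: sum(lam * m.get(w, 0.0) for lam, m in zip(weights, models))
            for w in vocab}

# Hypothetical toy "query logs" from three sources (web, mobile, voice).
web_log = ["pizza near me", "coffee shop seattle"]
mobile_log = ["pizza delivery", "gas station"]
voice_log = ["pizza near me", "pizza delivery"]

lm = interpolate(
    [train_unigram(c) for c in (web_log, mobile_log, voice_log)],
    weights=[0.4, 0.3, 0.3],  # assumed; in practice tuned on held-out data
)
```

In a real system the per-source models would be higher-order smoothed n-gram models and the interpolation weights would be optimized to minimize perplexity on held-out spoken-query transcripts; the feedback loop in contribution (3) amounts to periodically retraining the voice-log component on newly recognized queries.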
Xiao Li, Patrick Nguyen, Geoffrey Zweig, Dan Bohus
Added 21 May 2010
Updated 21 May 2010
Type Conference
Year 2009
Where ICASSP