ICASSP 2009, IEEE

Voice search of structured media data

This paper addresses the problem of using unstructured queries to search a structured database in voice search applications. By incorporating structural information from music metadata, the end-to-end search error has been reduced by 15% on text queries and up to 11% on spoken queries. On top of that, an HMM sequential rescoring model has reduced the error rate by 28% on text queries and up to 23% on spoken queries compared to the baseline system. Furthermore, a phonetic similarity model has been introduced to compensate for speech recognition errors, which has improved end-to-end search accuracy consistently across different levels of speech recognition accuracy.
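To illustrate the general idea behind matching against misrecognized queries, the sketch below computes a simple phonetic similarity score as normalized edit distance over phoneme sequences. This is a minimal, hypothetical example for intuition only, not the model proposed in the paper; the phoneme sequences and the scoring formula are assumptions.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # cost 0 if phonemes match, 1 for substitution/insertion/deletion
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def phonetic_similarity(query_phones, entry_phones):
    """Similarity in [0, 1]; 1.0 means identical phoneme sequences."""
    dist = edit_distance(query_phones, entry_phones)
    return 1.0 - dist / max(len(query_phones), len(entry_phones), 1)

# Hypothetical phoneme sequences: a misrecognized artist-name query
# still scores close to the intended database entry.
query = ["M", "AY", "D", "AA", "N", "AH"]   # recognizer output
entry = ["M", "AH", "D", "AA", "N", "AH"]   # metadata entry
score = phonetic_similarity(query, entry)
```

A search system could use such a score to rerank database entries when the recognizer's word-level output fails to match any entry exactly.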
Added 21 May 2010
Updated 21 May 2010
Type Conference
Year 2009
Where ICASSP
Authors Young-In Song, Ye-Yi Wang, Yun-Cheng Ju, Mike Seltzer, Ivan Tashev, Alex Acero