INTERSPEECH 2010

Exploring recognition network representations for efficient speech inference on highly parallel platforms

The emergence of highly parallel computing platforms is enabling new trade-offs in algorithm design for automatic speech recognition. It naturally motivates the following investigation: do the most computationally efficient sequential algorithms lead to the most computationally efficient parallel algorithms? In this paper we explore two contending recognition network representations for speech inference engines: the linear lexical model (LLM) and the weighted finite state transducer (WFST). We demonstrate that while an inference engine using the simpler LLM representation evaluates 22…
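For context, the sketch below illustrates how the two representations contrasted in the abstract are commonly laid out in a speech inference engine: a WFST compiles the lexicon, language model, and context dependency into a single flat arc list, while a linear lexical model keeps the pronunciation lexicon as an explicit tree of phone nodes and applies the language model at word ends. This is a minimal illustration under assumed field names; it is not code or data layout from the paper.

```cpp
// Illustrative sketch only; struct and field names are assumptions, not the paper's code.
#include <cstdint>
#include <vector>

// Weighted finite state transducer (WFST): all knowledge sources are composed
// offline, so the network is just a flat list of weighted arcs.
struct WfstArc {
    uint32_t next_state;  // destination state index
    uint32_t ilabel;      // input label (context-dependent phone / HMM state id)
    uint32_t olabel;      // output label (word id; 0 denotes epsilon)
    float    weight;      // combined lexicon/LM/transition cost (negative log prob)
};

// Linear lexical model (LLM): the lexicon stays an explicit phone tree, and
// language-model scores are applied dynamically when a word-final node is reached.
struct LlmNode {
    uint32_t phone_id;               // acoustic unit evaluated at this node
    int32_t  word_id;                // word id at a word-final node, otherwise -1
    std::vector<uint32_t> children;  // indices of successor phone nodes
};
```

In broad terms, the WFST trades offline compilation effort for a smaller, flatter search network, while the LLM keeps the structure simpler at the cost of more bookkeeping at decode time; the abstract's comparison concerns how these trade-offs play out on highly parallel hardware.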
Type Conference
Year 2010
Where INTERSPEECH
Authors Jike Chong, Ekaterina Gonina, Kisun You, Kurt Keutzer