ICMCS 2009, IEEE

Scalable HMM based inference engine in large vocabulary continuous speech recognition

Parallel scalability allows an application to efficiently utilize an increasing number of processing elements. In this paper we explore a design space for parallel scalability for an inference engine in large vocabulary continuous speech recognition (LVCSR). Our implementation of the inference engine involves a parallel graph traversal through an irregular graph-based knowledge network with millions of states and arcs. The challenge is not only to define a software architecture that exposes sufficient fine-grained application concurrency, but also to efficiently synchronize between an increasing number of concurrent tasks and to effectively utilize parallelism opportunities in today's highly parallel processors. We propose four application-level implementation alternatives we call "algorithm styles" and construct highly optimized implementations on two parallel platforms: an Intel Core i7 multicore processor and an NVIDIA GTX280 manycore processor. The highest performing ...
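The traversal the abstract describes is a time-synchronous expansion of active states through a weighted graph, with the best cost kept per destination state and a beam prune applied each frame. A minimal single-threaded sketch of that per-frame step follows; the toy arc list, function names, and beam width are all illustrative assumptions, not the paper's actual data structures (the real network has millions of states and arcs, and the parallel versions must synchronize concurrent writes to the same destination state).

```python
from collections import defaultdict

# Hypothetical toy knowledge network: arcs as (src, dst, cost) triples.
# This only illustrates the frame-synchronous traversal pattern.
ARCS = [
    (0, 1, 0.1), (0, 2, 0.7),
    (1, 3, 0.2), (2, 3, 0.1),
]

def viterbi_step(active, arcs, beam=1.0):
    """One time-synchronous expansion: propagate each active state's
    cost along its outgoing arcs, keep the minimum cost per destination
    (the write-contention point in a parallel implementation), then
    prune destinations whose cost exceeds the best cost plus the beam."""
    out = defaultdict(lambda: float("inf"))
    for src, dst, cost in arcs:
        if src in active:
            out[dst] = min(out[dst], active[src] + cost)
    best = min(out.values())
    return {s: c for s, c in out.items() if c <= best + beam}

active = {0: 0.0}                    # start state with zero cost
active = viterbi_step(active, ARCS)  # frame 1: reaches states 1 and 2
active = viterbi_step(active, ARCS)  # frame 2: both paths merge at state 3
```

In the parallel "algorithm styles" the paper compares, the min-per-destination reduction is the synchronization hot spot: on a manycore GPU it is typically handled with atomic min operations, while a multicore CPU version may partition destination states across threads to avoid locking.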
Added 19 Feb 2011
Updated 19 Feb 2011
Type Journal
Year 2009
Where ICMCS
Authors Jike Chong, Kisun You, Youngmin Yi, Ekaterina Gonina, Christopher Hughes, Wonyong Sung, Kurt Keutzer