ACL 2009

Summarizing multiple spoken documents: finding evidence from untranscribed audio

This paper presents a model for summarizing multiple untranscribed spoken documents. Without assuming the availability of transcripts, the model adapts a recently proposed unsupervised algorithm that detects recurring acoustic patterns in speech, and uses those patterns to estimate similarities between utterances, which in turn serve to identify salient utterances and remove redundancy. The model is of interest for its independence from spoken language transcription, an error-prone and resource-intensive process; its ability to integrate multiple sources of information on the same topic; and its novel use of acoustic patterns, which extends previous work on low-level prosodic feature detection. We compare the performance of this model with that achieved using manual and automatic transcripts, and find that the new approach is roughly equivalent to having access to ASR transcripts with word error rates in the 33…
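The selection stage the abstract describes — score utterances by similarity-based salience, then suppress redundancy — can be sketched as a greedy MMR-style loop. Everything below is illustrative: the function name, the `lam` trade-off, and the precomputed similarity matrix are assumptions; the paper's actual acoustic pattern-discovery step that would produce those similarities is not shown.

```python
import numpy as np

def select_salient_utterances(sim, k=2, lam=0.5):
    """Greedily pick k utterances that are central (similar to many
    others) while penalizing similarity to already-selected ones.
    `sim` is a symmetric (n x n) utterance-similarity matrix."""
    n = sim.shape[0]
    salience = sim.sum(axis=1)        # centrality proxy for each utterance
    selected = []
    candidates = list(range(n))       # deterministic scan order
    while candidates and len(selected) < k:
        def mmr(i):
            # redundancy = strongest similarity to anything already chosen
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            return lam * salience[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a toy matrix in which utterances 0 and 1 are near-duplicates, the loop picks the most central utterance first and then skips its duplicate in favor of a less redundant one — the behavior the abstract attributes to the similarity-based redundancy-removal step.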
Xiaodan Zhu, Gerald Penn, Frank Rudzicz
Type: Conference proceedings
Year: 2009
Where: ACL
Authors: Xiaodan Zhu, Gerald Penn, Frank Rudzicz