Sciweavers

ACL
2015

Reducing infrequent-token perplexity via variational corpora

The recurrent neural network (RNN) is recognized as a powerful language model (LM). We investigate its performance portfolio more deeply: it performs well on frequent grammatical patterns but much less so on infrequent terms. Such a portfolio is expected and desirable in applications like autocomplete, but is less useful in social content analysis, where many creative, unexpected usages occur (e.g., URL insertion). We adapt a generic RNN model and show that, with variational training corpora and epoch unfolding, the model improves its performance on the task of URL insertion suggestion.
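The abstract's central metric, perplexity split by token frequency, can be illustrated with a minimal sketch. This is not the paper's RNN model; it uses a toy unigram LM, an invented corpus, and an arbitrary frequency threshold purely to show how infrequent-token perplexity is measured separately from frequent-token perplexity.

```python
import math
from collections import Counter

def unigram_probs(corpus):
    """Estimate a toy unigram language model from token counts."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def perplexity(tokens, probs):
    """Perplexity = exp of the average negative log-likelihood."""
    nll = -sum(math.log(probs[t]) for t in tokens) / len(tokens)
    return math.exp(nll)

# Invented corpus: a rare URL-like token among common words.
corpus = ["the"] * 50 + ["cat"] * 30 + ["sat"] * 18 + ["http://t.co/x"] * 2
probs = unigram_probs(corpus)
counts = Counter(corpus)

threshold = 10  # arbitrary cutoff between "frequent" and "infrequent"
frequent = [t for t in corpus if counts[t] >= threshold]
infrequent = [t for t in corpus if counts[t] < threshold]

ppl_freq = perplexity(frequent, probs)
ppl_infreq = perplexity(infrequent, probs)
print(ppl_freq, ppl_infreq)
```

The gap between the two numbers is exactly the kind of imbalance the paper targets: the model is far more surprised by the rare URL-like token than by common words.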
Added 13 Apr 2016
Updated 13 Apr 2016
Type Journal
Year 2015
Where ACL
Authors Yusheng Xie, Pranjal Daga, Yu Cheng, Kunpeng Zhang, Ankit Agrawal, Alok N. Choudhary