Sciweavers

193 search results - page 25 / 39
Query: Approximate Maximum Parsimony and Ancestral Maximum Likeliho...
NIPS
2003
Wormholes Improve Contrastive Divergence
In models that define probabilities via energies, maximum likelihood learning typically involves using Markov Chain Monte Carlo to sample from the model’s distribution. If the ...
Geoffrey E. Hinton, Max Welling, Andriy Mnih
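The energy-based setting this abstract refers to can be made concrete with a small sketch. The code below is a generic CD-1 update for a binary restricted Boltzmann machine, a standard illustration of contrastive divergence rather than the wormhole variant the paper proposes; all names and shapes are chosen for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.01, rng=np.random):
    """One contrastive-divergence (CD-1) step for a binary RBM.

    v0: (batch, n_vis) binary data; W: (n_vis, n_hid) weights.
    Returns updated (W, b_vis, b_hid).
    """
    # Positive phase: hidden probabilities given the data.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random_sample(h0_prob.shape) < h0_prob).astype(float)

    # One Gibbs step: reconstruct visibles, then hidden probabilities again.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)

    # Approximate gradient: data statistics minus reconstruction statistics.
    batch = v0.shape[0]
    dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
    db_vis = (v0 - v1_prob).mean(axis=0)
    db_hid = (h0_prob - h1_prob).mean(axis=0)
    return W + lr * dW, b_vis + lr * db_vis, b_hid + lr * db_hid
```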
ICASSP
2010
IEEE
Large margin estimation of n-gram language models for speech recognition via linear programming
We present a novel discriminative training algorithm for n-gram language models for use in large vocabulary continuous speech recognition. The algorithm uses large margin estimati...
Vladimir Magdin, Hui Jiang
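As a rough illustration of the linear-programming view of large-margin estimation (a toy sketch, not the authors' formulation): treat the n-gram weights as variables and require that each correct transcription outscore a competing hypothesis by a common margin, which is then maximized. The feature-count differences and bounds below are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: each row gives n-gram count differences between a correct
# transcription and a competing hypothesis (correct minus competitor).
count_diffs = np.array([
    [ 1.0, -1.0,  0.0],
    [ 0.0,  2.0, -1.0],
    [-1.0,  1.0,  1.0],
])
n_feats = count_diffs.shape[1]

# Variables: [w_1, ..., w_n, margin].  Maximize the margin subject to
#   count_diffs @ w >= margin   for every (correct, competitor) pair,
# i.e.  -count_diffs @ w + margin <= 0, with the weights kept bounded.
c = np.zeros(n_feats + 1)
c[-1] = -1.0                      # linprog minimizes, so minimize -margin
A_ub = np.hstack([-count_diffs, np.ones((count_diffs.shape[0], 1))])
b_ub = np.zeros(count_diffs.shape[0])
bounds = [(-1.0, 1.0)] * n_feats + [(0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("margin:", res.x[-1], "weights:", res.x[:-1])
```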
ICASSP
2010
IEEE
Maximum-likelihood-based cepstral inverse filtering for blind speech dereverberation
Current state-of-the-art speech recognition systems work quite well in controlled environments, but their performance degrades severely in realistic acoustical conditions in reverb...
Kshitiz Kumar, Richard M. Stern
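The cepstral framing can be illustrated with a minimal sketch: in the log-spectral (cepstral) domain a convolutive distortion such as reverberation becomes roughly additive, which is what makes inverse filtering there attractive. The code below only shows that basic decomposition via frame-wise cepstra and mean subtraction; it is not the maximum-likelihood inverse filter of the paper, and the frame sizes are arbitrary.

```python
import numpy as np

def frame_cepstra(x, frame_len=512, hop=256, n_ceps=20):
    """Real cepstra of overlapping Hann-windowed frames of a 1-D signal."""
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(np.stack(frames), axis=1)) + 1e-10
    # The log turns a convolutive channel into an additive bias per frame.
    return np.fft.irfft(np.log(spectra), axis=1)[:, :n_ceps]

def cepstral_mean_subtraction(ceps):
    """Remove the long-term average cepstrum: a crude stand-in for
    removing a stationary channel/reverberation component."""
    return ceps - ceps.mean(axis=0, keepdims=True)

# Example with a synthetic signal standing in for speech.
rng = np.random.default_rng(0)
speech_like = rng.standard_normal(16000)
ceps = frame_cepstra(speech_like)
ceps_clean = cepstral_mean_subtraction(ceps)
```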
CORR
2006
Springer
Decision Making with Side Information and Unbounded Loss Functions
We consider the problem of decision-making with side information and unbounded loss functions. Inspired by the probably approximately correct learning model, we use a slightly differe...
Majid Fozunbal, Ton Kalker
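To make the setting concrete (a toy sketch, not the paper's construction): side information is an observation x, a decision rule maps x to an action, and the rule is judged by its expected loss. With finitely many side-information values one can pick, for each x, the action with the smallest empirical loss on a training sample; the loss table and data below are invented for the example, with one large loss standing in for an unbounded one.

```python
from collections import defaultdict

# Loss of taking action a when the true state is s.
loss = {("umbrella", "rain"): 0.0, ("umbrella", "sun"): 1.0,
        ("none", "rain"): 50.0,    ("none", "sun"): 0.0}

# Training sample of (side_information, true_state) pairs.
sample = [("cloudy", "rain"), ("cloudy", "rain"), ("cloudy", "sun"),
          ("clear", "sun"), ("clear", "sun"), ("clear", "rain")]

def empirical_rule(sample, actions):
    """For each side-information value, choose the action with the
    smallest average loss over the sample (an empirical plug-in rule)."""
    by_x = defaultdict(list)
    for x, s in sample:
        by_x[x].append(s)
    rule = {}
    for x, states in by_x.items():
        rule[x] = min(actions,
                      key=lambda a: sum(loss[(a, s)] for s in states) / len(states))
    return rule

print(empirical_rule(sample, ["umbrella", "none"]))
```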
TNN
2008
Adaptive Importance Sampling to Accelerate Training of a Neural Probabilistic Language Model
Previous work on statistical language modeling has shown that it is possible to train a feed-forward neural network to approximate probabilities over sequences of words, resulting...
Yoshua Bengio, Jean-Sébastien Senecal
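The bottleneck this abstract alludes to is the softmax normalization over the whole vocabulary in the output layer; importance sampling replaces the full sum in the gradient with a reweighted sample from a cheap proposal such as a unigram distribution. The sketch below shows that estimator in isolation, with made-up names and sizes; it uses a fixed proposal rather than the paper's adaptive one.

```python
import numpy as np

def sampled_softmax_grad(scores, target, proposal, n_samples, rng):
    """Estimate the gradient of -log softmax(scores)[target] w.r.t. scores.

    The exact gradient is softmax(scores) - onehot(target).  The expensive
    "negative phase" softmax term is estimated by importance sampling
    words j ~ proposal and weighting them by exp(scores[j]) / proposal[j].
    """
    grad = np.zeros_like(scores)
    grad[target] -= 1.0                      # positive phase (target word)

    idx = rng.choice(len(scores), size=n_samples, p=proposal)
    w = np.exp(scores[idx]) / proposal[idx]  # importance weights
    w /= w.sum()                             # self-normalized estimate
    np.add.at(grad, idx, w)                  # negative-phase estimate
    return grad

rng = np.random.default_rng(0)
vocab = 10000
scores = rng.standard_normal(vocab)          # output-layer scores for one context
unigram = np.full(vocab, 1.0 / vocab)        # stand-in proposal distribution
g = sampled_softmax_grad(scores, target=42, proposal=unigram,
                         n_samples=100, rng=rng)
```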