Sciweavers

421 search results - page 42 / 85
» Incorporating Test Inputs into Learning
NIPS 2008
Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning
Randomized neural networks are immortalized in this well-known AI Koan: In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. "What a...
Ali Rahimi, Benjamin Recht
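A minimal sketch of the idea in the title, assuming numpy and toy 1-D data (an illustration, not the authors' code): feature parameters are drawn at random once and never optimized, and only the linear output weights are fit, here by ridge regression in closed form.

```python
# Random-kitchen-sinks-style regression sketch (illustrative, toy data).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D regression data.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(200)

# Draw random feature parameters once; they are never trained.
D = 300                                   # number of random features
W = rng.standard_normal((X.shape[1], D))  # random projection directions
b = rng.uniform(0, 2 * np.pi, D)          # random phases

def features(X):
    """Random cosine features phi(x) = cos(x @ W + b)."""
    return np.cos(X @ W + b)

# Only the linear output weights are optimized (ridge regression).
Phi = features(X)
lam = 1e-3
alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)

# Predict on new inputs with the fixed random features.
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(features(X_test) @ alpha)
```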
GPEM 2002
Genetic Programming-based Construction of Features for Machine Learning and Knowledge Discovery Tasks
In this paper we use genetic programming for changing the representation of the input data for machine learners. In particular, the topic of interest here is feature construction i...
Krzysztof Krawiec
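A rough sketch of GP-style feature construction, with the evolutionary loop reduced to random search over small expression trees and a hypothetical wrapper fitness (this is not Krawiec's system):

```python
# Construct a new feature as a random arithmetic expression over the raw
# attributes and keep the candidate that best helps a simple learner.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: two raw attributes, target depends on their product.
X = rng.standard_normal((300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_expr(depth=2):
    """Build a small random expression tree over attribute indices."""
    if depth == 0 or rng.random() < 0.3:
        return ("var", rng.integers(X.shape[1]))
    op = rng.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, X):
    if expr[0] == "var":
        return X[:, expr[1]]
    return OPS[expr[0]](evaluate(expr[1], X), evaluate(expr[2], X))

def fitness(expr):
    """Wrapper fitness: accuracy of a median-threshold classifier on the feature."""
    f = evaluate(expr, X)
    preds = (f > np.median(f)).astype(float)
    return max(np.mean(preds == y), np.mean((1 - preds) == y))

# "Evolution" reduced to random search for brevity.
best = max((random_expr() for _ in range(200)), key=fitness)
print(best, fitness(best))
```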
PRL 2011
Consistency of functional learning methods based on derivatives
In some real-world applications, such as spectrometry, functional models achieve better predictive performance if they work on the derivatives of order m of their inputs rather t...
Fabrice Rossi, Nathalie Villa-Vialaneix
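A small sketch of the preprocessing idea, assuming curves sampled on a regular grid and plain finite differences (the paper's actual estimators and consistency analysis are not reproduced here):

```python
# Replace each discretized curve by its m-th order derivative before
# handing the data to an ordinary learner.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "spectra": 100 curves sampled on a regular grid of 50 points.
grid = np.linspace(0, 1, 50)
curves = np.sin(4 * np.pi * grid[None, :] * rng.uniform(0.5, 1.5, (100, 1)))
labels = rng.integers(0, 2, 100)  # placeholder targets

def derivative(curves, m, h):
    """m-th order derivative of each row via repeated finite differences."""
    d = curves
    for _ in range(m):
        d = np.diff(d, axis=1) / h
    return d

h = grid[1] - grid[0]
X_deriv = derivative(curves, m=2, h=h)   # work on second derivatives
print(curves.shape, "->", X_deriv.shape)
# X_deriv can now be fed to any standard classifier or regressor.
```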
INTERSPEECH 2010
Learning from human errors: prediction of phoneme confusions based on modified ASR training
In an attempt to improve models of human perception, the recognition of phonemes in nonsense utterances was predicted with automatic speech recognition (ASR) in order to analyze i...
Bernd T. Meyer, Birger Kollmeier
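One way to picture the comparison, under assumed placeholder data (this is not the paper's ASR pipeline): build row-normalized phoneme confusion matrices for human listeners and for an ASR system, then score their agreement with a simple correlation.

```python
# Compare an ASR phoneme confusion matrix with a human one (toy data).
import numpy as np

phonemes = ["p", "t", "k", "b", "d", "g"]  # hypothetical phoneme set

def confusion_matrix(presented, recognized, labels):
    """Rows: presented phoneme, columns: recognized phoneme (row-normalized)."""
    idx = {p: i for i, p in enumerate(labels)}
    M = np.zeros((len(labels), len(labels)))
    for p, r in zip(presented, recognized):
        M[idx[p], idx[r]] += 1
    return M / M.sum(axis=1, keepdims=True)

# Placeholder responses (would come from listening tests and ASR decoding).
rng = np.random.default_rng(3)
presented = rng.choice(phonemes, 500)
human = [p if rng.random() < 0.7 else rng.choice(phonemes) for p in presented]
asr = [p if rng.random() < 0.6 else rng.choice(phonemes) for p in presented]

C_human = confusion_matrix(presented, human, phonemes)
C_asr = confusion_matrix(presented, asr, phonemes)
print(np.corrcoef(C_human.ravel(), C_asr.ravel())[0, 1])
```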
NIPS 1997
Learning Generative Models with the Up-Propagation Algorithm
Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden var...
Jong-Hoon Oh, H. Sebastian Seung
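A hedged sketch of the core idea as described in the snippet, assuming a one-layer tanh generator and toy data (not the authors' exact algorithm): the sensory input is inverted by gradient steps on the hidden variables, and the same error signal updates the generative weights.

```python
# Invert a small generative model for one sensory pattern, then learn.
import numpy as np

rng = np.random.default_rng(4)

n_hidden, n_visible = 4, 16
W = 0.1 * rng.standard_normal((n_visible, n_hidden))  # generative weights

def generate(h, W):
    """Top-down generation: hidden causes -> predicted sensory pattern."""
    return np.tanh(W @ h)

x = rng.standard_normal(n_visible)  # one sensory input (toy data)

h = np.zeros(n_hidden)
lr_h, lr_W = 0.1, 0.01
for _ in range(200):
    pred = generate(h, W)
    err = x - pred                       # bottom-up error signal
    grad_pre = err * (1 - pred ** 2)     # back through the tanh
    h += lr_h * (W.T @ grad_pre)         # propagate error up to hidden units
    W += lr_W * np.outer(grad_pre, h)    # learn the generative weights

print(np.mean((x - generate(h, W)) ** 2))  # reconstruction error after inversion
```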