Learning Stochastic Perceptrons Under k-Blocking Distributions

We present a statistical method that PAC learns the class of stochastic perceptrons with arbitrary monotonic activation function and weights w_i ∈ {−1, 0, +1} when the probability distribution that generates the input examples is a member of a family we call k-blocking distributions. Such distributions represent an important step beyond the case where each input variable is statistically independent, since the 2k-blocking family contains all Markov distributions of order k. By a stochastic perceptron we mean a perceptron which, upon presentation of an input vector x, outputs 1 with probability f(Σ_i w_i x_i − θ). Because the same algorithm works for any monotonic (nondecreasing or nonincreasing) activation function f on the Boolean domain, it handles the well-studied cases of sigmoid…
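The model itself is easy to simulate. The sketch below is not the paper's learning algorithm; it is a minimal illustration of the stochastic perceptron's output distribution, assuming a sigmoid activation. The function names, weights, and threshold value here are illustrative choices, not taken from the paper.

```python
import math
import random

def stochastic_perceptron(x, w, theta, f):
    """Sample one output: returns 1 with probability f(sum_i w_i*x_i - theta)."""
    p = f(sum(wi * xi for wi, xi in zip(w, x)) - theta)
    return 1 if random.random() < p else 0

# Sigmoid: a monotonic (nondecreasing) activation function.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative parameters: weights restricted to {-1, 0, +1} as in the paper.
w = [1, -1, 0, 1]
theta = 0.5
x = [1, 0, 1, 1]  # Boolean input vector

# Averaging many samples recovers f(w.x - theta) = sigmoid(1.5) ~ 0.82.
random.seed(0)
samples = [stochastic_perceptron(x, w, theta, sigmoid) for _ in range(10000)]
print(sum(samples) / len(samples))
```

Repeated sampling of the same input x estimates the conditional output probability, which is the quantity the paper's statistical method must recover from examples drawn under a k-blocking distribution.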
Added 02 Nov 2010
Updated 02 Nov 2010
Type Conference
Year 1994
Where NIPS
Authors Mario Marchand, Saeed Hadjifaradji