We present a generalization of the nonnegative matrix factorization (NMF), where a multilayer generative network with nonnegative weights is used to approximate the observed nonne...
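For reference, the single-layer NMF being generalized can be sketched with the classical Lee-Seung multiplicative updates; the rank, iteration count, and random initialization below are illustrative choices, not the paper's.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-10, seed=0):
    """Classical single-layer NMF via Lee-Seung multiplicative updates:
    approximate a nonnegative matrix V (m x n) as W @ H with W, H >= 0
    by reducing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # multiplicative step keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # multiplicative step keeps W >= 0
    return W, H

# Illustrative usage on a random nonnegative matrix.
V = np.random.default_rng(1).random((20, 15))
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H)
```

The multiplicative form of the updates is what preserves nonnegativity without explicit projection steps.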
We present a family of incremental Perceptron-like algorithms (PLAs) with margin in which both the "effective" learning rate, defined as the ratio of the learning rate t...
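The PLA family above varies the effective learning rate during training; as a fixed-rate baseline for comparison, the classical perceptron with a fixed margin can be sketched as follows (the data, margin, and learning rate are illustrative, and this is not the paper's variant).

```python
import numpy as np

def margin_perceptron(X, y, margin=0.1, lr=1.0, epochs=100):
    """Classical perceptron with a fixed margin: update on every example
    whose functional margin y * (w . x) fails to exceed `margin`.
    X: (n, d) features; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= margin:  # inside the margin: mistake-driven step
                w += lr * yi * xi
                updated = True
        if not updated:  # every example clears the margin: converged
            break
    return w

# Illustrative linearly separable toy data.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = margin_perceptron(X, y)
```

Requiring the margin to exceed a positive constant, rather than merely requiring correct classification, is what distinguishes this from the plain perceptron.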
Co-training is a semi-supervised learning paradigm that trains two learners, each on a different view of the data, and lets the learners label some unlabeled examples for each oth...
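A minimal sketch of the generic co-training loop (in the style of Blum and Mitchell) is shown below; the nearest-centroid base learner, the confidence measure, and the labeling schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

class CentroidClassifier:
    """Tiny nearest-centroid base learner (a stand-in for any classifier)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
    def confidence(self, X):
        d = np.sort(np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2), axis=1)
        return d[:, 1] - d[:, 0]  # gap between the two nearest centroids

def co_train(X1, X2, y, labeled, rounds=5, per_round=2):
    """Generic co-training loop: each round, the learner trained on one
    view pseudo-labels its most confident unlabeled examples, which then
    become training data for both views."""
    labeled = set(labeled)
    for _ in range(rounds):
        idx = np.array(sorted(labeled))
        h1 = CentroidClassifier().fit(X1[idx], y[idx])
        h2 = CentroidClassifier().fit(X2[idx], y[idx])
        for h, X in ((h1, X1), (h2, X2)):
            u = np.array([i for i in range(len(y)) if i not in labeled])
            if u.size == 0:
                break
            picks = u[np.argsort(h.confidence(X[u]))[-per_round:]]
            y[picks] = h.predict(X[picks])  # pseudo-label for the other learner
            labeled.update(int(i) for i in picks)
    idx = np.array(sorted(labeled))
    return (CentroidClassifier().fit(X1[idx], y[idx]),
            CentroidClassifier().fit(X2[idx], y[idx]))

# Illustrative two-view toy data: each view is separable on its own,
# and only one example per class starts out labeled.
rng = np.random.default_rng(0)
y_true = np.repeat([0, 1], 10)
X1 = rng.normal(0.0, 0.3, (20, 2)) + 5.0 * y_true[:, None]
X2 = rng.normal(0.0, 0.3, (20, 2)) + 5.0 * y_true[:, None]
y = np.where(np.isin(np.arange(20), [0, 10]), y_true, -1)
h1, h2 = co_train(X1, X2, y, labeled=[0, 10])
```

The point of the two views is that each learner's confident predictions carry information the other view's learner does not yet have.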
Causal independence modelling is a well-known method both for reducing the size of probability tables and for explaining the underlying mechanisms in Bayesian networks. In this pap...
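The canonical causal-independence model is the noisy-OR gate, which replaces a full conditional probability table of size 2^n with one inhibition parameter per cause; a minimal sketch follows, with illustrative parameter values.

```python
import numpy as np

def noisy_or(active, inhibit):
    """Noisy-OR causal-independence model:
    P(effect | causes) = 1 - prod over active causes i of q_i,
    where q_i is the probability that active cause i is inhibited.
    Needs n parameters instead of a 2^n conditional probability table."""
    active = np.asarray(active, dtype=bool)
    q = np.asarray(inhibit, dtype=float)
    return 1.0 - float(np.prod(q[active]))

# Two of three causes active, with inhibition probabilities 0.2 and 0.1:
p = noisy_or([True, False, True], [0.2, 0.5, 0.1])  # 1 - 0.2 * 0.1 = 0.98
```

With no active causes the empty product is 1, so the effect probability is 0, matching the usual leak-free noisy-OR semantics.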
Bagging (Bootstrap Aggregating) has proven to be a useful, effective, and simple ensemble learning methodology. In generic bagging methods, all the classifiers which are trai...
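The generic bagging procedure the abstract refers to (train each member on a bootstrap resample, combine by unweighted majority vote) can be sketched as follows; the decision-stump base learner and the toy data are illustrative assumptions.

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold rule (decision stump) on {+1, -1} labels."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                acc = (np.where(sign * (X[:, j] - t) > 0, 1, -1) == y).mean()
                if best is None or acc > best[0]:
                    best = (acc, j, t, sign)
    return best[1:]

def stump_predict(stump, X):
    j, t, sign = stump
    return np.where(sign * (X[:, j] - t) > 0, 1, -1)

def bagging(X, y, n_estimators=15, seed=0):
    """Generic bagging: every member is trained on a bootstrap resample
    of the data, and all members vote with equal weight."""
    rng = np.random.default_rng(seed)
    samples = (rng.integers(0, len(X), len(X)) for _ in range(n_estimators))
    stumps = [fit_stump(X[idx], y[idx]) for idx in samples]
    def predict(Xq):
        votes = np.sum([stump_predict(s, Xq) for s in stumps], axis=0)
        return np.where(votes >= 0, 1, -1)
    return predict

# Illustrative 1-D data: the label is the sign of x - 0.5.
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = np.where(X[:, 0] > 0.5, 1, -1)
predict = bagging(X, y)
acc = (predict(X) == y).mean()
```

Because each bootstrap sample omits roughly a third of the data, the members differ, and averaging their votes reduces the variance of the unstable base learner.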