Sciweavers

159 search results - page 7 / 32
» Margin Maximizing Loss Functions
AAAI 2000
A Unified Bias-Variance Decomposition for Zero-One and Squared Loss
The bias-variance decomposition is a very useful and widely-used tool for understanding machine-learning algorithms. It was originally developed for squared loss. In recent years,...
Pedro Domingos
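The squared-loss case that this entry's unified decomposition generalizes can be checked numerically. A minimal sketch, assuming an illustrative noise-free target and a synthetic distribution of predictions (neither is from the paper): for squared loss, the expected loss splits exactly into squared bias plus variance.

```python
import numpy as np

# Identity being illustrated: for a fixed target y and predictions f drawn
# across training sets, E[(f - y)^2] = (E[f] - y)^2 + Var[f].
rng = np.random.default_rng(0)
y = 1.0                                            # assumed noise-free target
preds = 0.8 + 0.3 * rng.standard_normal(100_000)   # assumed prediction distribution

mse = np.mean((preds - y) ** 2)
bias_sq = (np.mean(preds) - y) ** 2
variance = np.var(preds)

# The decomposition holds exactly (up to floating-point error).
assert abs(mse - (bias_sq + variance)) < 1e-9
```

The paper's contribution is extending this kind of additive decomposition beyond squared loss, e.g. to zero-one loss, where the naive sum no longer holds.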
JMLR 2010
Classification Methods with Reject Option Based on Convex Risk Minimization
In this paper, we investigate the problem of binary classification with a reject option in which one can withhold the decision of classifying an observation at a cost lower than t...
Ming Yuan, Marten H. Wegkamp
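The classical rule that reject-option methods build on can be sketched in a few lines. This is Chow's rule for binary classification, not the paper's convex-risk formulation; the posterior values and rejection cost below are illustrative assumptions.

```python
def classify_with_reject(p, d=0.3):
    """Chow's rule: given posterior p = P(y=1|x), unit misclassification
    cost, and rejection cost d < 1/2, reject when both classes are too
    uncertain, i.e. min(p, 1-p) > d; otherwise predict the likelier class.
    (Illustrative sketch; d=0.3 is an assumed cost, not from the paper.)"""
    if min(p, 1 - p) > d:
        return "reject"
    return 1 if p >= 0.5 else 0

assert classify_with_reject(0.95) == 1         # confident positive
assert classify_with_reject(0.05) == 0         # confident negative
assert classify_with_reject(0.55) == "reject"  # too uncertain at cost 0.3
```

As the rejection cost d approaches 1/2, the reject region shrinks to nothing and the rule reduces to ordinary Bayes classification.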
AAAI 2010
Non-I.I.D. Multi-Instance Dimensionality Reduction by Learning a Maximum Bag Margin Subspace
Multi-instance learning, like other machine learning tasks, suffers from the curse of dimensionality. Although dimensionality reduction methods have been investigated for many ...
Wei Ping, Ye Xu, Kexin Ren, Chi-Hung Chi, Shen Fur...
EPEW 2006, Springer
Explicit Inverse Characterizations of Acyclic MAPs of Second Order
This paper shows how to construct a Markovian arrival process of second order from information on the marginal distribution and on its autocorrelation function. More precisely, clo...
Armin Heindl, Gábor Horváth, Karsten...
ECAI 2004, Springer
A Generalized Quadratic Loss for Support Vector Machines
The standard SVM formulation for binary classification is based on the hinge loss function, in which errors are assumed to be uncorrelated. Because of this, local information in the featu...
Filippo Portera, Alessandro Sperduti
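For contrast with the uncorrelated hinge loss this entry describes, a minimal sketch. The coupled form s^T S s below is an assumed reading of "generalized quadratic loss", with S a hypothetical similarity matrix; it is not the paper's exact formulation.

```python
import numpy as np

def hinge(margins):
    # Standard SVM hinge loss: each margin violation counted independently.
    return np.maximum(0.0, 1.0 - np.asarray(margins, dtype=float))

def generalized_quadratic(margins, S):
    # Assumed coupled form s^T S s: slack values interact through a
    # similarity matrix S (hypothetical illustration). With S = identity
    # this reduces to the ordinary sum of squared slacks.
    s = hinge(margins)
    return float(s @ S @ s)

m = [2.0, 0.5, -1.0]                              # margins y_i * f(x_i)
loss_hinge = hinge(m).sum()                       # 0.0 + 0.5 + 2.0
loss_indep = generalized_quadratic(m, np.eye(3))  # 0.0 + 0.25 + 4.0
```

With a non-diagonal S, violations on similar examples reinforce each other, which is one way local feature-space information can enter the loss.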