Sciweavers

1474 search results for »Using Machine Learning to Focus Iterative Optimization« (page 165 of 295)
KDD
2010
ACM
Grafting-light: fast, incremental feature selection and structure learning of Markov random fields
Feature selection is an important task for achieving better generalizability in high-dimensional learning, and structure learning of Markov random fields (MRFs) can automat...
Jun Zhu, Ni Lao, Eric P. Xing
CISS
2008
IEEE
Subgradient methods in network resource allocation: Rate analysis
We consider dual subgradient methods for solving (nonsmooth) convex constrained optimization problems. Our focus is on generating approximate primal solutions with performance ...
Angelia Nedic, Asuman E. Ozdaglar
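The abstract above mentions subgradient methods for nonsmooth constrained convex problems. As a minimal illustration (not the authors' dual rate-analysis algorithm), here is a projected subgradient step with the classical diminishing step size on a toy problem: minimize f(x) = |x - 3| over the interval [0, 2], whose constrained minimizer is x* = 2.

```python
# Projected subgradient sketch for a nonsmooth constrained convex problem:
# minimize |x - 3| subject to x in [0, 2]; the constrained optimum is x* = 2.
import math

def subgradient(x):
    # A subgradient of |x - 3|: sign(x - 3), with 0 allowed at the kink x = 3.
    return -1.0 if x < 3 else (1.0 if x > 3 else 0.0)

def project(x, lo=0.0, hi=2.0):
    # Euclidean projection onto the feasible interval [lo, hi].
    return min(max(x, lo), hi)

x = 0.0
for k in range(1, 200):
    step = 1.0 / math.sqrt(k)  # diminishing step size, standard for subgradients
    x = project(x - step * subgradient(x))

print(x)  # approaches the constrained minimizer 2.0
```

Because f is nondifferentiable, the iterates need not decrease f monotonically; the diminishing step size is what guarantees convergence in the convex case.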
ICML
2007
IEEE
Classifying matrices with a spectral regularization
We propose a method for the classification of matrices. We use a linear classifier with a novel regularization scheme based on the spectral 1-norm of its coefficient matrix. The s...
Ryota Tomioka, Kazuyuki Aihara
ICML
2006
IEEE
Robust Euclidean embedding
We derive a robust Euclidean embedding procedure based on semidefinite programming that may be used in place of the popular classical multidimensional scaling (cMDS) algorithm. We...
Lawrence Cayton, Sanjoy Dasgupta
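For reference, classical multidimensional scaling (cMDS) — the baseline the paper proposes to replace — recovers point coordinates from pairwise Euclidean distances via double-centering and an eigendecomposition. This sketch shows cMDS only, not the authors' SDP-based robust embedding.

```python
# Classical MDS (cMDS): recover coordinates from pairwise Euclidean distances.
import numpy as np

def classical_mds(D, dim):
    """Embed n points in R^dim from an n-by-n matrix D of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)         # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]     # keep the top-dim components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# One-dimensional example: points at 0, 1, 3 on a line.
x = np.array([0.0, 1.0, 3.0])
D = np.abs(x[:, None] - x[None, :])
X = classical_mds(D, 1)
# Pairwise distances of the embedding match D (up to sign and translation).
print(np.abs(X[:, 0][:, None] - X[:, 0][None, :]))
```

cMDS is exact when D is truly Euclidean; its sensitivity to corrupted or non-Euclidean entries is precisely what motivates a robust alternative.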
ICML
2004
IEEE
Gradient LASSO for feature selection
LASSO (Least Absolute Shrinkage and Selection Operator) is a useful tool for achieving shrinkage and variable selection simultaneously. Since LASSO uses the L1 penalty, the optim...
Yongdai Kim, Jinseog Kim
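The L1 penalty the snippet refers to makes the LASSO objective nondifferentiable at zero, which is what produces exactly-sparse solutions. A standard way to see this is the soft-thresholding (proximal) operator; the ISTA sketch below illustrates that sparsity effect and is not the gradient-LASSO algorithm of the paper.

```python
# LASSO via proximal gradient (ISTA): minimize 0.5*||A w - b||^2 + lam*||w||_1.
import numpy as np

def soft_threshold(z, t):
    # Prox of t*||.||_1: shrink toward zero, setting small entries exactly to 0.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b)           # gradient of the smooth part
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]                   # sparse ground truth
b = A @ w_true
w = lasso_ista(A, b, lam=0.5)
print(np.round(w, 2))  # first two coefficients large, the rest near zero
```

The hard zeroing in `soft_threshold` is exactly the nonsmoothness the abstract alludes to: a plain gradient step on the L1 term would never land exactly on zero.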