Sciweavers

180 search results - page 17 / 36
Search: On the Convergence Rate of Good-Turing Estimators
COLT 2008 (Springer)
More Efficient Internal-Regret-Minimizing Algorithms
Standard no-internal-regret (NIR) algorithms compute a fixed point of a matrix, and hence typically require O(n^3) run time per round of learning, where n is the dimensionality of...
Amy R. Greenwald, Zheng Li, Warren Schudy
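The fixed-point step in the abstract can be made concrete: in the standard no-internal-regret construction, the play distribution p is a fixed point p = pQ of a row-stochastic matrix Q derived from the experts' recommendations, and the O(n^3) cost per round comes from solving that linear system. A minimal sketch (function name and example matrix are illustrative, not the paper's code):

```python
import numpy as np

def stationary_fixed_point(Q):
    """Find p with p = p Q for a row-stochastic matrix Q.

    This is the O(n^3) fixed-point step: solve (Q^T - I) p = 0
    subject to sum(p) = 1 by replacing one redundant equation with
    the normalization constraint.
    """
    n = Q.shape[0]
    A = Q.T - np.eye(n)
    A[-1, :] = 1.0          # normalization row: sum(p) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example: a small row-stochastic matrix (hypothetical values).
Q = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
p = stationary_fixed_point(Q)
```

The returned p satisfies p = pQ up to floating-point error; faster NIR algorithms amortize or approximate exactly this solve.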
CSDA 2007
Relaxed Lasso
The Lasso is an attractive regularisation method for high dimensional regression. It combines variable selection with an efficient computational procedure. However, the rate of co...
Nicolai Meinshausen
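The relaxed Lasso the abstract introduces is, at heart, a two-stage procedure: select variables with the Lasso, then re-estimate the coefficients on the selected support under a weaker penalty, undoing most of the shrinkage while keeping the sparse support. A self-contained sketch of that idea built on a plain coordinate-descent Lasso (helper names and penalty values are illustrative, not the paper's code):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso:
    minimize (1/2n) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            z = X[:, j] @ r / n
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

def relaxed_lasso(X, y, lam, phi):
    """Two-stage relaxed Lasso sketch: the Lasso at penalty `lam`
    selects the variables; a second Lasso at the relaxed penalty
    phi * lam (0 < phi <= 1) re-estimates the coefficients on the
    selected columns only."""
    support = np.flatnonzero(lasso_cd(X, y, lam))
    b = np.zeros(X.shape[1])
    if support.size:
        b[support] = lasso_cd(X[:, support], y, phi * lam)
    return b
```

With phi = 1 this reduces to the ordinary Lasso; small phi approaches a least-squares refit on the selected support.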
BMCBI 2006
Parameter estimation for stiff equations of biosystems using radial basis function networks
Background: The modeling of dynamic systems requires estimating kinetic parameters from experimentally measured time-courses. Conventional global optimization methods used for par...
Yoshiya Matsubara, Shinichi Kikuchi, Masahiro Sugi...
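A common ingredient of such RBF-based estimation is to fit the network to the measured time-course and differentiate it analytically, so kinetic parameters can be matched to slopes without repeatedly integrating the stiff system. A sketch of that smoothing step only, assuming Gaussian basis functions (center placement and width are illustrative choices, not the paper's settings):

```python
import numpy as np

def rbf_fit(t, y, centers, width):
    """Fit a Gaussian RBF network y(t) ~ sum_j w_j * phi_j(t) to a
    measured time-course by linear least squares."""
    Phi = np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_eval_deriv(t, w, centers, width):
    """Analytic derivative of the fitted network, usable as dy/dt when
    matching an ODE right-hand side to data."""
    d = t[:, None] - centers[None, :]
    Phi = np.exp(-(d / width) ** 2)
    dPhi = -2.0 * d / width ** 2 * Phi
    return dPhi @ w
```

Fitting sin(t) this way recovers a derivative close to cos(t) away from the interval boundaries, which is the quantity a gradient-matching parameter estimate consumes.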
ICML 2007 (IEEE)
Manifold-adaptive dimension estimation
Intuitively, learning should be easier when the data points lie on a low-dimensional submanifold of the input space. Recently there has been a growing interest in algorithms that ...
Amir Massoud Farahmand, Csaba Szepesvári, J...
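One simple estimator in this family reads the intrinsic dimension off the growth rate of nearest-neighbor distances: on a d-dimensional submanifold the k-NN radius scales roughly like (k/n)^(1/d), so comparing the k-th and 2k-th neighbor radii gives d ~ ln 2 / ln(r_2k / r_k). A simplified sketch in that spirit (not the paper's exact estimator):

```python
import numpy as np

def knn_dimension_estimate(X, k=5):
    """Estimate intrinsic dimension of points X (n x D) from the ratio
    of the 2k-th to the k-th nearest-neighbor distance at each point,
    aggregated by the median.  Brute-force distances; fine for small n."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    r = np.sqrt(np.sort(D2, axis=1))      # sorted distances per point
    rk, r2k = r[:, k], r[:, 2 * k]        # k-th and 2k-th NN radii
    est = np.log(2.0) / np.log(r2k / rk)  # per-point dimension estimate
    return float(np.median(est))
```

For points on a 2-dimensional subspace linearly embedded in a higher-dimensional ambient space, the estimate comes out near 2 regardless of the ambient dimension.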
ICASSP 2009 (IEEE)
Improved subspace DoA estimation methods with large arrays: The deterministic signals case
This paper is devoted to subspace DoA estimation using a large antenna array when the number of available snapshots is of the same order of magnitude as the number of senso...
Pascal Vallet, Philippe Loubaton, Xavier Mestre
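For context, the classical subspace (MUSIC) estimator that such work improves upon can be sketched for a half-wavelength uniform linear array; the regime the abstract describes, with as few snapshots as sensors, is exactly where this textbook version degrades (array model and names below are illustrative):

```python
import numpy as np

def music_spectrum(Y, n_sources, grid):
    """MUSIC pseudo-spectrum for an M-sensor uniform linear array with
    half-wavelength spacing.  Y is the M x N snapshot matrix; peaks of
    the returned spectrum over `grid` (angles in radians) indicate DoAs."""
    M, N = Y.shape
    R = Y @ Y.conj().T / N                    # sample covariance
    eigval, eigvec = np.linalg.eigh(R)        # ascending eigenvalues
    En = eigvec[:, : M - n_sources]           # noise-subspace basis
    m = np.arange(M)
    spectrum = []
    for theta in grid:
        a = np.exp(1j * np.pi * m * np.sin(theta))   # steering vector
        proj = En.conj().T @ a
        spectrum.append(1.0 / np.real(proj.conj() @ proj))
    return np.array(spectrum)
```

When N is comparable to M, the sample covariance R is a poor estimate of the true covariance and the noise subspace En is misaligned, which motivates the improved estimators studied in the paper.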