Sciweavers

180 search results - page 17 / 36
» On the Convergence Rate of Good-Turing Estimators
COLT 2008, Springer
More Efficient Internal-Regret-Minimizing Algorithms
Standard no-internal-regret (NIR) algorithms compute a fixed point of a matrix, and hence typically require O(n³) run time per round of learning, where n is the dimensionality of...
Amy R. Greenwald, Zheng Li, Warren Schudy
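The snippet above refers to the classical per-round step of a no-internal-regret learner: turn the current recommendations into a row-stochastic matrix Q and play the stationary distribution p with p = pQ. Below is a minimal sketch of just that fixed-point step, the O(n³) bottleneck the paper improves on; the function name and toy matrix are illustrative, and this is not the paper's faster algorithm.

```python
import numpy as np

def stationary_distribution(Q: np.ndarray) -> np.ndarray:
    """Fixed point p = p @ Q of a row-stochastic matrix Q.

    Solving this dense linear system each round is the O(n^3) cost the
    abstract refers to; the paper's point is to avoid paying it every round.
    """
    n = Q.shape[0]
    # Stack p (Q - I) = 0 with the normalisation sum(p) = 1 and solve by least squares.
    A = np.vstack([Q.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# Toy example: a 3x3 row-stochastic "recommendation" matrix.
Q = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
print(stationary_distribution(Q))
```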
CSDA 2007
Relaxed Lasso
The Lasso is an attractive regularisation method for high dimensional regression. It combines variable selection with an efficient computational procedure. However, the rate of co...
Nicolai Meinshausen
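As a rough illustration of the relaxed-lasso idea (decoupling variable selection from coefficient shrinkage), the sketch below runs an ordinary lasso to pick a support and then refits by least squares on that support using scikit-learn. Meinshausen's estimator is a two-parameter (λ, φ) family; this shows only the φ = 0 corner, and the function names and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def relaxed_lasso_ols(X, y, alpha=0.1):
    """Lasso for variable selection, then OLS refit on the selected support.

    A phi = 0 special case of the relaxed lasso, shown only to illustrate
    how selection and shrinkage are decoupled.
    """
    lasso = Lasso(alpha=alpha).fit(X, y)
    support = np.flatnonzero(lasso.coef_)
    beta = np.zeros(X.shape[1])
    if support.size:
        ols = LinearRegression().fit(X[:, support], y)
        beta[support] = ols.coef_
    return beta, support

# Toy high-dimensional example: 50 samples, 200 features, 5 truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
beta_true = np.zeros(200)
beta_true[:5] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(50)
beta_hat, support = relaxed_lasso_ols(X, y, alpha=0.2)
print(sorted(support))
```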
BMCBI 2006
Parameter estimation for stiff equations of biosystems using radial basis function networks
Background: The modeling of dynamic systems requires estimating kinetic parameters from experimentally measured time-courses. Conventional global optimization methods used for par...
Yoshiya Matsubara, Shinichi Kikuchi, Masahiro Sugi...
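A hedged sketch of the general idea suggested by the abstract: fit a smooth radial-basis-function approximation to the measured time-course, differentiate it analytically, and estimate kinetic parameters by least squares against the ODE right-hand side, so no stiff integration is needed inside the fit. The one-parameter decay model dx/dt = -k x and all names below are illustrative assumptions, not the paper's biosystem models.

```python
import numpy as np

def rbf_smooth(t, x, centers, width, ridge=1e-6):
    """Fit x(t) with Gaussian RBFs; return the weight vector."""
    Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)), Phi.T @ x)

def rbf_eval(t, centers, width, w, deriv=False):
    """Evaluate the RBF fit (or its analytic time derivative) at times t."""
    d = t[:, None] - centers[None, :]
    Phi = np.exp(-d ** 2 / (2 * width ** 2))
    if deriv:
        Phi = Phi * (-d / width ** 2)
    return Phi @ w

# Synthetic time-course for dx/dt = -k x with k = 0.8, plus measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 40)
x = np.exp(-0.8 * t) + 0.01 * rng.standard_normal(t.size)

centers = np.linspace(0, 5, 12)
w = rbf_smooth(t, x, centers, width=0.6)
x_s = rbf_eval(t, centers, 0.6, w)               # smoothed values
dx_s = rbf_eval(t, centers, 0.6, w, deriv=True)  # smoothed derivatives

# Least-squares estimate of k from dx/dt ~ -k x, without integrating the ODE.
k_hat = -np.sum(dx_s * x_s) / np.sum(x_s ** 2)
print(f"estimated k = {k_hat:.3f}")
```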
ICML 2007, IEEE
Manifold-adaptive dimension estimation
Intuitively, learning should be easier when the data points lie on a low-dimensional submanifold of the input space. Recently there has been a growing interest in algorithms that ...
Amir Massoud Farahmand, Csaba Szepesvári, J...
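A rough sketch of a nearest-neighbour scaling estimate of intrinsic dimension in the spirit of this line of work: around a point on a d-dimensional submanifold the k-NN radius scales like (k/n)^(1/d), so comparing the k-th and 2k-th neighbour distances gives a local dimension estimate. The exact estimator and analysis in the paper may differ; the names and data below are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_dimension_estimate(X, k=5):
    """Estimate intrinsic dimension from how k-NN distances scale.

    Locally d is approximately log 2 / log(r_2k / r_k); point-wise
    estimates are combined with a median for robustness.
    """
    tree = cKDTree(X)
    # Query 2k+1 neighbours; the first is the point itself at distance 0.
    dists, _ = tree.query(X, k=2 * k + 1)
    r_k, r_2k = dists[:, k], dists[:, 2 * k]
    local = np.log(2.0) / np.log(r_2k / r_k)
    return float(np.median(local))

# Points on a 2-D plane embedded in 10-D ambient space.
rng = np.random.default_rng(0)
Z = rng.standard_normal((2000, 2))
X = Z @ rng.standard_normal((2, 10))
print(knn_dimension_estimate(X))   # expect a value close to 2
```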
ICASSP 2009, IEEE
Improved subspace DoA estimation methods with large arrays: The deterministic signals case
This paper is devoted to subspace DoA estimation using a large antenna array when the number of available snapshots is of the same order of magnitude as the number of senso...
Pascal Vallet, Philippe Loubaton, Xavier Mestre
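For context, the sketch below implements the classical subspace (MUSIC) baseline for a half-wavelength uniform linear array, evaluated in the regime the abstract describes, where the snapshot count is comparable to the sensor count. The paper studies improved estimators for this setting; the array geometry, source angles, and names here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def music_spectrum(Y, n_sources, grid_deg):
    """Classical MUSIC pseudo-spectrum for a half-wavelength uniform linear array.

    Y: (n_sensors, n_snapshots) complex snapshot matrix.
    """
    M, N = Y.shape
    R = Y @ Y.conj().T / N                      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - n_sources]            # noise subspace (smallest eigenvalues)
    theta = np.deg2rad(grid_deg)
    m = np.arange(M)[:, None]
    A = np.exp(1j * np.pi * m * np.sin(theta)[None, :])   # steering vectors on the grid
    proj = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / proj

# Two sources at -10 and 20 degrees; 20 sensors but only 25 snapshots.
rng = np.random.default_rng(0)
M, N, doas = 20, 25, np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(doas)[None, :])
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
Y = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

grid = np.linspace(-90, 90, 721)
spec = music_spectrum(Y, n_sources=2, grid_deg=grid)
# Pick the two highest local maxima of the pseudo-spectrum as DoA estimates.
peaks = [i for i in range(1, len(spec) - 1) if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
top = sorted(peaks, key=lambda i: spec[i])[-2:]
print(sorted(grid[top]))   # should land near [-10, 20]
```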