Abstract: The nonlinear conjugate gradient method is widely used in unconstrained optimization. However, the line search is sometimes difficult or expensive. In this paper, we...
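As context for the role of the line search, here is a minimal sketch of a standard nonlinear conjugate gradient iteration (the Fletcher-Reeves variant with Armijo backtracking) run on the Rosenbrock function. The test problem, line-search constants, and restart safeguard are illustrative assumptions, not details from the truncated abstract; each backtracking trial costs an extra function evaluation, which is the expense that motivates line-search-free variants.

```python
# Minimal sketch: Fletcher-Reeves nonlinear CG with Armijo backtracking.
# Test function, constants, and safeguard are illustrative assumptions.
import numpy as np

def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
        200.0 * (x[1] - x[0]**2),
    ])

def armijo(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    # Each backtracking trial is one more function evaluation: the
    # per-iteration cost that makes line searches expensive in practice.
    fx = f(x)
    while f(x + alpha * d) > fx + c * alpha * (g @ d):
        alpha *= rho
    return alpha

def fletcher_reeves_cg(f, grad, x0, tol=1e-8, max_iter=2000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:                    # restart if direction is uphill
            d = -g
        x = x + armijo(f, x, d, g) * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d, g = -g_new + beta * d, g_new
    return x

print(fletcher_reeves_cg(rosenbrock, rosenbrock_grad, [-1.2, 1.0]))  # ~(1, 1)
```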
Oja’s principal subspace algorithm is a well-known and powerful technique for learning and tracking principal information in time series. A thorough investigation of the conve...
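For readers who have not seen it, the update behind Oja's principal subspace algorithm is W <- W + eta * (x y^T - W y y^T) with y = W^T x. The sketch below runs it on a synthetic stream; the dimensions, learning rate, and data model are assumptions chosen for illustration, not taken from the paper under discussion.

```python
# Minimal sketch of Oja's principal subspace rule on a synthetic stream.
# Dimensions, learning rate, and data model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, r, eta = 10, 3, 0.01                # input dim, subspace dim, step size

A = rng.standard_normal((n, r))        # covariance has r dominant directions
W = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal initial guess

for _ in range(20000):
    x = A @ rng.standard_normal(r) + 0.1 * rng.standard_normal(n)
    y = W.T @ x                        # project sample onto current estimate
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))   # Oja subspace update

# Columns of W should now approximately span the top-r eigenspace of E[x x^T]
# and be near-orthonormal:
print(np.round(W.T @ W, 3))
```

With a constant step size the iterates only hover near the principal subspace; a decaying step size is needed for almost-sure convergence, which is exactly the kind of question a convergence analysis of the algorithm must settle.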
We discuss the almost-sure convergence of a broad class of sampling algorithms for multi-stage stochastic linear programs. We provide a convergence proof based on the finiteness o...
The Magnus series is an infinite series which arises in the study of linear ordinary differential equations. If the series converges, then the matrix exponential of the sum equals...
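For reference, the statement alluded to can be written out as follows; the notation (A(t) for the coefficient matrix, Omega(t) for the Magnus series, [X, Y] for the commutator) is standard but supplied here by the editor rather than quoted from the truncated abstract.

```latex
% For Y'(t) = A(t) Y(t) with Y(0) = Y_0, whenever the Magnus series converges,
% the fundamental solution is Y(t) = exp(Omega(t)) Y_0, with leading terms
\[
  \Omega(t) \;=\; \int_0^t A(s_1)\,\mathrm{d}s_1
  \;+\; \frac{1}{2}\int_0^t\!\!\int_0^{s_1} \bigl[A(s_1),\,A(s_2)\bigr]\,
        \mathrm{d}s_2\,\mathrm{d}s_1 \;+\; \cdots,
  \qquad [X,Y] = XY - YX .
\]
```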
In Reinforcement Learning (RL), there has been some experimental evidence that the residual gradient algorithm converges more slowly than the TD(0) algorithm. In this paper, we use the ...
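To make the contrast concrete, the sketch below implements both updates for a linear value function V(s) = theta^T phi(s) on a small deterministic chain. The MDP, features, and step size are illustrative assumptions, not the experimental setup of the paper.

```python
# Minimal sketch: TD(0) vs. residual gradient for V(s) = theta @ phi(s).
# The chain MDP, features, and step size are illustrative assumptions.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
phi = np.eye(n_states)                 # tabular one-hot features

def td0(theta, s, r, s_next):
    delta = r + gamma * theta @ phi[s_next] - theta @ phi[s]
    return theta + alpha * delta * phi[s]          # semi-gradient update

def residual_gradient(theta, s, r, s_next):
    delta = r + gamma * theta @ phi[s_next] - theta @ phi[s]
    # exact gradient of 0.5 * delta**2 (valid here: transitions deterministic)
    return theta - alpha * delta * (gamma * phi[s_next] - phi[s])

rng = np.random.default_rng(0)
theta_td = np.zeros(n_states)
theta_rg = np.zeros(n_states)
for _ in range(5000):
    s = int(rng.integers(n_states))    # uniformly sampled start state
    s_next = min(s + 1, n_states - 1)  # deterministic walk toward the end
    r = 1.0 if s_next == n_states - 1 else 0.0
    theta_td = td0(theta_td, s, r, s_next)
    theta_rg = residual_gradient(theta_rg, s, r, s_next)

print("TD(0):            ", np.round(theta_td, 2))
print("residual gradient:", np.round(theta_rg, 2))
```

Note that the residual gradient step is the true gradient of the squared Bellman residual only for deterministic transitions; with stochastic transitions an unbiased estimate requires two independent samples of the successor state (double sampling), which is part of what makes the two methods behave differently.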