AAAI
2015

Parallel Gaussian Process Regression for Big Data: Low-Rank Representation Meets Markov Approximation

The expressive power of a Gaussian process (GP) model comes at the cost of poor scalability in the data size. To improve scalability, this paper presents a low-rank-cum-Markov approximation (LMA) of the GP model. LMA is novel in leveraging dual computational advantages: it complements a low-rank approximate representation of the full-rank GP, based on a support set of inputs, with a Markov approximation of the resulting residual process. The latter approximation is guaranteed to be closest in the Kullback-Leibler distance criterion subject to some constraint, and is considerably more refined than that of existing sparse GP models utilizing low-rank representations, due to its more relaxed conditional independence assumption (especially with larger data). As a result, our LMA method can trade off between the size of the support set and the order of the Markov property to (a) incur lower computational cost than such sparse GP models while achieving predictive performance ...
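The abstract builds on the standard low-rank (support-set) approximation of a full GP. As a rough illustration only, and not the paper's LMA method (which additionally imposes a Markov approximation on the residual process), the sketch below shows a subset-of-regressors low-rank GP predictor in NumPy; the kernel choice, support set `S`, and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=1.0, sf2=1.0):
    # Squared-exponential kernel matrix between row sets A and B
    # (an assumed kernel choice for this illustration).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ls**2)

def sor_predict(X, y, S, Xq, noise=0.1):
    """Subset-of-regressors predictive mean with support set S.
    Cost is O(n m^2) for n training points and m support inputs,
    versus O(n^3) for exact full-GP inference."""
    Ksx = rbf(S, X)                       # m x n cross-covariance
    Kss = rbf(S, S)                       # m x m support covariance
    A = noise**2 * Kss + Ksx @ Ksx.T      # m x m system, cheap to solve
    w = np.linalg.solve(A, Ksx @ y)
    return rbf(Xq, S) @ w                 # predictive mean at queries Xq

# Toy regression problem: noisy sine observations.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
S = np.linspace(-3, 3, 15)[:, None]       # small support set of inputs
Xq = np.array([[0.0], [1.5]])
mu = sor_predict(X, y, S, Xq)             # approximates sin at the queries
```

The trade-off the abstract refers to is visible here: a larger support set `S` tightens the low-rank approximation at higher cost, and LMA further refines the residual left over by this low-rank part via a Markov approximation of adjustable order.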
Added 27 Mar 2016
Updated 27 Mar 2016
Type Conference
Year 2015
Where AAAI
Authors Kian Hsiang Low, Jiangbo Yu, Jie Chen, Patrick Jaillet