
ALT 2004, Springer

We consider a two-layer network algorithm. The first layer consists of an uncountable number of linear units; each linear unit is an LMS algorithm whose inputs are first "kernelized," and each unit is indexed by the value of a parameter corresponding to a parameterized reproducing kernel. The first-layer outputs are connected to an exponential weights algorithm, which combines them to produce the final output. We give loss bounds for this algorithm, and for specific applications to prediction relative to the best convex combination of kernels and to the best width of a Gaussian kernel. The algorithm's predictions require the computation of an expectation that is a quotient of integrals, as seen in a variety of Bayesian inference problems. Typically this computational problem is tackled by MCMC, importance sampling, and other sampling techniques, for which there are few polynomial-time guarantees of the quality of the approximation in general, and none for our problem specifically.
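The two-layer structure can be sketched concretely. Below is a minimal, hedged illustration (not the paper's algorithm, which uses an uncountable family of units and an integral-quotient prediction): a finite grid of kernel widths stands in for the continuous parameter, each width indexing an online kernelized LMS learner in dual form, and an exponential weights rule combines their predictions according to cumulative squared loss. All function names, the learning rate `lr`, and the mixing rate `eta` are illustrative choices.

```python
import numpy as np

def gaussian_kernel(x, z, sigma):
    # Gaussian (RBF) kernel with width parameter sigma
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def kernel_lms_exp_weights(X, y, sigmas, lr=0.1, eta=0.5):
    """Sketch: one kernelized LMS learner per kernel width in `sigmas`,
    combined online by exponential weights on cumulative squared loss.
    A finite grid replaces the paper's continuum of units."""
    n, m = len(X), len(sigmas)
    alphas = [np.zeros(n) for _ in range(m)]  # dual coefficients per learner
    losses = np.zeros(m)                      # cumulative squared loss per learner
    preds = np.zeros(n)
    for t in range(n):
        # each first-layer unit predicts via its kernel expansion over past points
        p = np.array([
            sum(alphas[j][s] * gaussian_kernel(X[t], X[s], sigmas[j])
                for s in range(t))
            for j in range(m)
        ])
        # second layer: exponential-weights mixture of the units' predictions
        # (shift by the minimum loss for numerical stability)
        w = np.exp(-eta * (losses - losses.min()))
        w /= w.sum()
        preds[t] = w @ p
        # LMS (stochastic gradient) update, expressed in the dual:
        # the new coefficient is the step size times the residual
        for j in range(m):
            losses[j] += (y[t] - p[j]) ** 2
            alphas[j][t] = lr * (y[t] - p[j])
    return preds
```

A continuous parameter space would replace the finite weight vector `w` with a density over widths, making each prediction a quotient of integrals, which is exactly the computational problem the abstract discusses.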


Added: 15 Mar 2010
Updated: 15 Mar 2010
Type: Conference
Year: 2004
Where: ALT
Authors: Mark Herbster
