FOCS 2006 (IEEE)

Learning an unknown halfspace (also called a perceptron) from labeled examples is one of the classic problems in machine learning. In the noise-free case, when a halfspace consistent with all the training examples exists, the problem can be solved in polynomial time using linear programming. However, under the promise that a halfspace consistent with a fraction (1 − ε) of the examples exists (for some small constant ε > 0), it was not known how to efficiently find a halfspace that is correct on even 51% of the examples. Nor was there a known hardness result ruling out agreement with more than 99.9% of the examples. In this work, we close this gap in our understanding and prove that even a tiny amount of worst-case noise makes the problem of learning halfspaces intractable in a strong sense. Specifically, for arbitrary ε, δ > 0, we prove that given a set of example-label pairs from the hypercube, a fraction (1 − ε) of which can be explained by a halfspace, it is NP-hard to find a halfspace that is correct on a (1/2 + δ) fraction of the examples.

Added: 11 Jun 2010
Updated: 11 Jun 2010
Type: Conference
Year: 2006
Where: FOCS
Authors: Venkatesan Guruswami, Prasad Raghavendra
