ECCC 2010

Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas

Much work has been done on learning various classes of "simple" monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort, by showing that polynomial-time algorithms cannot learn constant-depth monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query model. Using a recent characterization of Strong Statistical Query learnability due to Feldman [14], we first show that depth-3 monotone formulas of size n^{o(1)} cannot be learned by any polynomial-time Statistical Query algorithm to accuracy 1 - 1/(log n)^{Ω(1)}. We then build on this result to show that depth-4 monotone formulas of size n^{o(1)} cannot be learned even to a certain 1/2 + o(1) accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of "mildly hard" learning problems in either the PAC or Statistical Query framework…
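To make the Statistical Query model referenced in the abstract concrete, the sketch below (not from the paper) simulates an SQ oracle for a monotone function under the uniform distribution on {0,1}^n: the learner supplies a query g(x, f(x)) and a tolerance τ, and the oracle may return any value within τ of E_x[g(x, f(x))]. The function `majority` and the parameter settings are illustrative assumptions; the exact expectation is computed by brute-force enumeration, which is feasible only for tiny n.

```python
import itertools
import random

def majority(x):
    """A simple monotone Boolean function: majority of the input bits."""
    return int(sum(x) > len(x) / 2)

def sq_oracle(f, query, n, tau):
    """Statistical Query oracle under the uniform distribution on {0,1}^n.

    Given a query g(x, f(x)) -> {0,1} and a tolerance tau, returns some
    value within tau of E_x[g(x, f(x))].  Here the exact expectation is
    computed by enumerating all 2^n inputs (illustrative; tiny n only),
    then perturbed by up to +/- tau to mimic the oracle's freedom to
    answer adversarially within the tolerance.
    """
    total = 0
    for x in itertools.product([0, 1], repeat=n):
        total += query(x, f(x))
    exact = total / 2 ** n
    noisy = exact + random.uniform(-tau, tau)
    return min(1.0, max(0.0, noisy))

n, tau = 5, 0.01
# Query: how often does the first input bit agree with f(x)?
est = sq_oracle(majority, lambda x, y: int(x[0] == y), n, tau)
print(round(est, 2))
```

An SQ lower bound of the kind the paper proves says that no polynomial-time learner interacting only through such an oracle (with inverse-polynomial tolerance) can reach the stated accuracy, no matter which queries it asks.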
Vitaly Feldman, Homin K. Lee, Rocco A. Servedio
Added 10 Dec 2010
Updated 10 Dec 2010
Type Journal
Year 2010
Where ECCC
Authors Vitaly Feldman, Homin K. Lee, Rocco A. Servedio