
FOCS 2008, IEEE

Hardness of Minimizing and Learning DNF Expressions

We study the problem of finding the minimum-size DNF formula for a function f : {0, 1}^d → {0, 1} given its truth table. We show that unless NP ⊆ DTIME(n^{poly(log n)}), there is no polynomial-time algorithm that approximates this problem to within a factor of d^{1−ε}, where ε > 0 is an arbitrarily small constant. Our result essentially matches the known O(d) approximation for the problem. We also study weak learnability of small-size DNF formulas. We show that assuming NP ⊄ RP, for arbitrarily small constant ε > 0 and any fixed positive integer t, a two-term DNF cannot be PAC-learnt in polynomial time by a t-term DNF to within 1/2 + ε accuracy. Under the same complexity assumption, we show that for arbitrarily small constants µ, ε > 0 and any fixed positive integer t, an AND function (i.e., a single-term DNF) cannot be PAC-learnt in polynomial time under adversarial µ-noise by a t-CNF to within 1/2 + ε accuracy.
Subhash Khot, Rishi Saket
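
As a rough illustration of the minimization problem in the first result (not taken from the paper), the sketch below brute-forces the smallest DNF for a truth table over d bits, treating "size" as the number of terms (an assumption for this example). It runs in exponential time and is only meant to make the problem's input and output concrete for tiny d; the paper's point is that no polynomial-time algorithm can even approximate this well.

```python
from itertools import combinations, product

def covers(term, x):
    # term is a tuple over {'0', '1', '*'}; x is a tuple of bits.
    # The term covers x if every non-'*' position matches x.
    return all(t == '*' or int(t) == xi for t, xi in zip(term, x))

def min_dnf(truth_table, d):
    # Exhaustively find a DNF with the fewest terms computing f,
    # where truth_table maps each d-bit tuple to 0 or 1.
    inputs = list(product((0, 1), repeat=d))
    ones = [x for x in inputs if truth_table[x] == 1]
    zeros = [x for x in inputs if truth_table[x] == 0]
    if not ones:
        return []  # the constant-0 function: empty DNF
    # Implicants: terms that never cover a 0-input of f.
    implicants = [t for t in product('01*', repeat=d)
                  if not any(covers(t, x) for x in zeros)]
    # Smallest set of implicants jointly covering every 1-input
    # (a set-cover search; exponential time, tiny d only).
    for k in range(1, len(ones) + 1):
        for terms in combinations(implicants, k):
            if all(any(covers(t, x) for t in terms) for x in ones):
                return list(terms)

# Example: 3-bit majority; a minimum DNF is x1x2 OR x1x3 OR x2x3 (3 terms).
f = {x: int(sum(x) >= 2) for x in product((0, 1), repeat=3)}
print(min_dnf(f, 3))  # prints some 3-term DNF as patterns over '0'/'1'/'*'
```

The known O(d) approximation mentioned in the abstract and the d^{1−ε} hardness factor both refer to this truth-table version of the problem, whose input already has size 2^d.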
Added: 29 May 2010
Updated: 29 May 2010
Type: Conference
Year: 2008
Where: FOCS
Authors: Subhash Khot, Rishi Saket