Attribute-efficient and non-adaptive learning of parities and DNF expressions. (English) Zbl 1222.68096
Summary: We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over \(\{0,1\}^{n}\). We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from a low number of errors, a long-standing open problem in coding theory. This is the first evidence that attribute-efficient learning of a natural PAC-learnable concept class can be computationally hard. An algorithm is said to use membership queries (MQs) non-adaptively if the points at which the algorithm asks MQs do not depend on the target concept. Using a simple non-adaptive parity learning algorithm and a modification of Levin’s algorithm for locating a weakly correlated parity due to N. H. Bshouty, J. C. Jackson and C. Tamon [“More efficient PAC learning of DNF with membership queries under the uniform distribution”, in: Proceedings of the 12th annual conference on computational learning theory. COLT ’99. New York, NY: ACM, 286–295 (1999)], we give the first non-adaptive and attribute-efficient algorithm for learning DNF with respect to the uniform distribution. Our algorithm runs in time \(\tilde{O}(ns^{4}/\epsilon)\) and uses \(\tilde{O}(s^{4} \cdot \log^{2}n/\epsilon)\) non-adaptive MQs, where \(s\) is the number of terms in the shortest DNF representation of the target concept. The algorithm improves on the best previously known algorithm for learning DNF [loc. cit.] and can also be easily modified to tolerate random persistent classification noise in MQs.
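To make the notion of non-adaptive MQs concrete, the following Python sketch learns an exact parity \(f(x) = \langle a, x\rangle \bmod 2\) from a query set fixed before any answers are seen. This is a minimal illustration assumed for this review, not the paper's attribute-efficient algorithm: it uses \(n + 1\) queries, whereas an attribute-efficient learner must use a number of queries polylogarithmic in \(n\).

```python
# A minimal sketch (not the paper's algorithm): non-adaptive membership-query
# learning of a parity over {0,1}^n. The query set -- the all-zeros vector plus
# the n standard basis vectors -- is fixed in advance, so the points queried do
# not depend on the target concept, which is exactly the non-adaptivity notion
# defined above.

def learn_parity_nonadaptive(mq, n):
    """Recover the relevant-variable set of a parity using n + 1 fixed MQs.

    mq: membership-query oracle mapping a length-n 0/1 tuple to f(x) in {0, 1}.
    Returns the set of indices i with a_i = 1.
    """
    zero = tuple([0] * n)
    bias = mq(zero)  # handles an optional constant term f(x) = <a, x> + b mod 2
    relevant = set()
    for i in range(n):
        e_i = tuple(1 if j == i else 0 for j in range(n))
        if mq(e_i) != bias:  # flipping bit i flips the output iff a_i = 1
            relevant.add(i)
    return relevant

# Example: the parity x_1 XOR x_4 over n = 6 variables.
if __name__ == "__main__":
    target = {1, 4}
    oracle = lambda x: sum(x[i] for i in target) % 2
    print(learn_parity_nonadaptive(oracle, 6))  # prints {1, 4}
```

The paper's contribution is to achieve the same non-adaptivity while being attribute-efficient, i.e. with query complexity depending only polylogarithmically on \(n\); the sketch above shows only the non-adaptive query structure.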

MSC:
68Q32 Computational learning theory