
The parity problem in the presence of noise, decoding random linear codes, and the subset sum problem. (English) Zbl 1142.68399
Chekuri, Chandra (ed.) et al., Approximation, randomization and combinatorial optimization. Algorithms and techniques. 8th international workshop on approximation algorithms for combinatorial optimization problems, APPROX 2005, and 9th international workshop on randomization and computation, RANDOM 2005, Berkeley, CA, USA, August 22–24, 2005. Proceedings. Berlin: Springer (ISBN 3-540-28239-4/pbk). Lecture Notes in Computer Science 3624, 378-389 (2005).
Summary: A. Blum et al. [J. ACM 50, 506–519 (2003)] demonstrated the first sub-exponential algorithm for learning the parity function in the presence of noise. They solved the length-\(n\) parity problem in time \(2^{O(n/\log n)}\), but their algorithm required \(2^{O(n/\log n)}\) labeled examples. As an open problem, they asked whether there exists a \(2^{o(n)}\) algorithm for the length-\(n\) parity problem that uses only \(\text{poly}(n)\) labeled examples. In this work, we provide a positive answer to this question. We show that there is an algorithm that solves the length-\(n\) parity problem in time \(2^{O(n/\log\log n)}\) using \(n^{1+\varepsilon}\) labeled examples. This result immediately gives us a sub-exponential algorithm for decoding \(n \times n^{1+\varepsilon}\) random binary linear codes (i.e., codes where the messages are \(n\) bits and the codewords are \(n^{1+\varepsilon}\) bits) in the presence of random noise. We are also able to extend the same techniques to provide a sub-exponential algorithm for dense instances of the random subset sum problem.
For the entire collection see [Zbl 1087.90002].
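To make the learning setting concrete, the following Python sketch generates labeled examples for the length-\(n\) parity problem with noise and measures how well a candidate parity vector agrees with the labels. It only illustrates the problem setup, not the algorithm of the paper (which achieves time \(2^{O(n/\log\log n)}\) from \(n^{1+\varepsilon}\) examples); the function names and the noise rate used here are illustrative assumptions.

```python
import random

def noisy_parity_samples(secret, m, noise_rate, rng=random):
    """Generate m labeled examples (x, b) for the length-n parity problem:
    b = <x, secret> mod 2, with the label flipped independently with
    probability noise_rate (illustrative setup, not the paper's algorithm)."""
    n = len(secret)
    samples = []
    for _ in range(m):
        x = [rng.randrange(2) for _ in range(n)]
        b = sum(xi & si for xi, si in zip(x, secret)) % 2
        if rng.random() < noise_rate:
            b ^= 1  # random noise flips the label
        samples.append((x, b))
    return samples

def agreement(candidate, samples):
    """Fraction of samples whose label matches the candidate parity."""
    hits = sum((sum(xi & ci for xi, ci in zip(x, candidate)) % 2) == b
               for x, b in samples)
    return hits / len(samples)

# Toy usage: the true secret's parity agrees with roughly a
# (1 - noise_rate) fraction of the labels, which is what a learner
# must distinguish from the ~1/2 agreement of a wrong candidate.
if __name__ == "__main__":
    n, m, eta = 8, 200, 0.1
    secret = [random.randrange(2) for _ in range(n)]
    data = noisy_parity_samples(secret, m, eta)
    print(agreement(secret, data))  # close to 1 - eta
```

Viewing the \(m\) example vectors as the rows of a random \(n \times m\) binary generator matrix shows why this setup is equivalent to decoding a random linear code under random noise, as stated in the summary.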

MSC:
68Q32 Computational learning theory
68T05 Learning and adaptive systems in artificial intelligence
11B05 Density, gaps, topology
11Y16 Number-theoretic algorithms; complexity