an:07174543 Zbl 1443.03022 Bienvenu, Laurent; Csima, Barbara F.; Harrison-Trainor, Matthew Optimal bounds for single-source Kolmogorov extractors EN Trans. Am. Math. Soc. 373, No. 3, 1983-2006 (2020). 00447133 2020
j
03D32 05C80 68Q30 randomness extraction; Kolmogorov complexity; effective packing dimension; hypergraphs The Kolmogorov complexity $$C(\sigma)$$ of a finite binary string $$\sigma$$ measures its information content. Formally, it is defined as the length of the shortest program which, when run on a previously fixed universal interpreter, produces the string in question. As the choice of the universal interpreter is arbitrary within a certain class of admissible interpreters, this number is only meaningful up to an additive constant. This is why the notion is most interesting when applied to {\em in}finite binary sequences $$X$$, where one studies how fast the Kolmogorov complexity of longer and longer finite initial segments of such a sequence approaches infinity. For infinite sequences with strong regularities (for instance, computable sequences, that is, sequences output by some algorithm) the growth rate will be quite low, meaning that the information content of the sequence becomes negligible in the limit. For other infinite sequences without regularities, such as the typical result of tossing a fair coin infinitely often, the Kolmogorov complexity of each initial segment may be roughly equal to its length. Thus, intuitively, we can think of these two kinds of sequences as possessing 0\% and 100\%, respectively, of what we might informally call information density. That density can easily be diluted, for example by interspersing a sequence of high density with another one of low density according to some systematic pattern: say, after every three bits from the one sequence we insert one bit from the other. The inverse problem is more interesting: If we have a sequence of information density strictly between 0\% and 100\%, can we algorithmically boil it down to a sequence of strictly higher density? This is known as the problem of randomness extraction and is the subject of the present article. 
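The dilution just described can be sketched in code. The following is a small illustration (ours, not from the paper), assuming an idealized model in which a maximally random sequence has density 1 and a computable one density 0; the 3:1 interleaving pattern then dilutes the density to the weighted average 3/4:

```python
# Sketch (illustrative assumption, not the paper's construction):
# diluting information density by interleaving two bit sequences
# in a fixed "three bits from one, one bit from the other" pattern.

def interleave(dense, sparse, k=3):
    """After every k bits of `dense`, insert one bit of `sparse`."""
    out = []
    it_d, it_s = iter(dense), iter(sparse)
    try:
        while True:
            for _ in range(k):
                out.append(next(it_d))
            out.append(next(it_s))
    except StopIteration:
        pass
    return out

def model_density(d_dense, d_sparse, k=3):
    """Idealized limiting density of the interleaved sequence:
    a weighted average of the two input densities."""
    return (k * d_dense + d_sparse) / (k + 1)
```

For instance, interleaving a density-1 sequence with a density-0 sequence in the 3:1 pattern gives `model_density(1, 0) == 0.75` in this model.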
Of course, if we write $$X {\upharpoonright} n$$ for the first $$n$$ bits of an infinite binary sequence $$X$$, $$\lim_{n\rightarrow \infty} C(X{\upharpoonright} n)/n$$ need not exist. Thus, it is necessary to look at the {\em effective Hausdorff dimension} $$\mathrm{dim}(X)=\liminf_{n\rightarrow \infty} C(X{\upharpoonright} n)/n$$ or the {\em effective packing dimension} $$\mathrm{Dim}(X)=\limsup_{n\rightarrow \infty} C(X{\upharpoonright} n)/n$$ instead. Then the question above becomes: Are there algorithms which can be run on infinite sequences $$X$$ and which are guaranteed to produce as output infinite sequences $$Y$$ such that $$\mathrm{dim}(Y)>\mathrm{dim}(X)$$ or such that $$\mathrm{Dim}(Y)>\mathrm{Dim}(X)$$? In the case of $$\mathrm{dim}$$, the answer is negative: \textit{J. S. Miller} [Adv. Math. 226, No. 1, 373--384 (2011; Zbl 1214.03030)] showed that for every possible value of $$\mathrm{dim}(X)$$ there is some $$X$$ for which no $$Y$$ as above can be produced from $$X$$ algorithmically. But the case of $$\mathrm{Dim}$$ is very different, as \textit{L. Fortnow} et al. [Lect. Notes Comput. Sci. 4051, 335--345 (2006; Zbl 1223.68060)] used results from complexity theory to show that from every infinite sequence $$X$$ with $$\mathrm{Dim}(X)>0$$ a sequence $$Y$$ with $$\mathrm{Dim}(Y)$$ arbitrarily close to $$1$$ can be extracted. Their extraction procedure is based on ``almost uniform'' extraction for {\em finite} strings; namely, they show that for every $$0< \alpha<\beta <1$$ there is an extraction function $$E$$ which works on finite strings of information density at least $$\alpha$$ to extract another finite string of information density at least $$\beta$$ if provided with some additional finite advice. \textit{M. Zimand} [``Symmetry of information and bounds on nonuniform randomness extraction via Kolmogorov extractors'', in: 26th Annual IEEE conference on computational complexity. Los Alamitos, CA: IEEE Computer Society. 
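To see why the $$\liminf$$ and $$\limsup$$ can genuinely differ, consider a standard toy block construction (an illustration of ours, not the paper's): blocks of doubling length alternate between incompressible and all-zero content. Under the idealized assumption that an incompressible block of length $$\ell$$ contributes $$\ell$$ to the complexity and an all-zero block contributes nothing, the ratios $$C(X{\upharpoonright} n)/n$$ oscillate, giving $$\mathrm{dim}(X)=1/3$$ and $$\mathrm{Dim}(X)=2/3$$:

```python
from fractions import Fraction

def block_density_ratios(num_blocks):
    """Idealized model (an assumption for illustration): blocks of
    lengths 1, 2, 4, 8, ... alternate between 'incompressible'
    (complexity ~= length) and 'all zero' (complexity ~= 0).
    Returns the ratio C(X|n)/n at the end of each block."""
    ratios, total_len, total_c = [], 0, 0
    for i in range(num_blocks):
        length = 2 ** i
        total_len += length
        if i % 2 == 0:          # incompressible block
            total_c += length
        ratios.append(Fraction(total_c, total_len))
    return ratios

ratios = block_density_ratios(30)
# Ratios at the ends of incompressible blocks (even indices) tend
# to 2/3; at the ends of zero blocks (odd indices) they tend to 1/3.
# So in this model dim(X) = 1/3 < Dim(X) = 2/3, and the plain limit
# does not exist.
```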
148--156 (2011; \url{doi:10.1109/CCC.2011.21})] made precise how much advice is needed for which values of $$\alpha$$ and $$\beta$$. The present paper improves Zimand's results. Its contents can be summarized as follows: For $$k \geq 1$$, the authors define the set $$\mathsf{EXT}(k)$$ as the set of those pairs $$(\alpha,\beta)$$ for which the extraction works by $$k$$ different total computable functions (which corresponds to requiring roughly $$\log_2 k$$ bits of advice). In other words, $$(\alpha,\beta) \in \mathsf{EXT}(k)$$ if there is a function $$f\colon \mathbb{N}\rightarrow \mathbb{N}$$ and an algorithm which, for all $$n$$, given a string $$\sigma$$ of length $$f(n)$$ as input, produces $$k$$ strings of length $$n$$ as output, and which has the property that if $$\sigma$$ has information density at least $$\alpha$$, then {\em at least one} of the $$k$$ output strings has information density at least $$\beta$$. The first main result of the article is an exact characterization of $$\mathsf{EXT}(k)$$ as follows: If $$k=1$$, then $$(\alpha,\beta) \in \mathsf{EXT}(k)$$ if and only if $$\beta \leq \alpha$$, and therefore no meaningful randomness extraction is possible in this case. For $$k \geq 2$$, $$(\alpha,\beta) \in \mathsf{EXT}(k)$$ if and only if either $$\alpha=\beta\in\{0,1\}$$ or $$\beta < \frac{k\alpha}{1+(k-1)\alpha}$$. The proof proceeds by first showing that extraction for a given pair $$(\alpha,\beta)$$ is possible if and only if there exists a sequence of hypergraphs whose hyperedges are distributed somewhat evenly, where the exact quality of this evenness is parametrized by $$\alpha$$ and $$\beta$$. Then, to get the positive part (when extraction is possible) of their characterization, the authors use the probabilistic method to show the existence of the needed hypergraphs for the right choices of $$\alpha$$ and $$\beta$$. 
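The threshold in this characterization is easy to explore numerically. The sketch below (the function name is ours) evaluates $$k\alpha/(1+(k-1)\alpha)$$ and illustrates two features consistent with the results quoted above: for $$k=1$$ the bound collapses to $$\alpha$$ itself (no meaningful extraction), and as $$k$$ grows, i.e.\ as more advice bits become available, the achievable output density approaches $$1$$:

```python
def extraction_bound(k, alpha):
    """Supremum of output densities beta achievable with k total
    computable extractors on sources of density alpha, per the
    characterization quoted above: beta < k*alpha / (1 + (k-1)*alpha)."""
    return k * alpha / (1 + (k - 1) * alpha)

# k = 1 gives back alpha: no gain without advice.
# Doubling k repeatedly (about one extra advice bit each time)
# pushes the bound toward 1, in line with Fortnow et al.
```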
The authors point out that they do not know whether their extraction technique can be carried out in polynomial time, because it is unknown whether hypergraphs with the properties they require can be constructed in polynomial time. On the other hand, to get the negative part of their characterization, the authors give a lengthy combinatorial argument that shows that the wrong values of $$\alpha$$ and $$\beta$$ lead to contradictions in connection with the length $$f(n)$$ of the extraction source string $$\sigma$$, the amount of information that can be contained within this string, and information conservation. Next, the authors vary the setting of their first main result by modifying the definition of $$\mathsf{EXT}(k)$$ so as to allow the extraction algorithm to be partial for some bad choices of advice. They call $$\mathsf{EXT}^p(k)$$ the set of pairs $$(\alpha,\beta)$$ resulting from this modification. Kolmogorov complexity is not computable, but only algorithmically approximable from above, meaning that there is never certainty about whether a short description found for some string is already the shortest. Thus, one might think that allowing partial functions here might enable riskier extraction strategies that are allowed to fail, for instance, when they get stuck in an infinite loop searching for non-existent shorter descriptions of some strings. However, the authors show that allowing such strategies only makes a marginal difference when compared with the case where the extraction algorithm is required to always be total; namely, they show that $$\mathsf{EXT}^p(k)$$ equals $$\mathsf{EXT}(k)$$ plus some computable pairs on the boundary of $$\mathsf{EXT}(k)$$. In the last section of the article, the authors use a variant of their hypergraph argument to slightly improve another of Zimand's results [loc. cit.] in the setting where the advice is not of constant size anymore, but is allowed to grow computably with the input length. 
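The approximability from above mentioned in the previous paragraph can be illustrated with a computable stand-in: any lossless compressor yields a certified upper bound on a string's information content, but never a certificate of minimality, since a better compressor may later reveal a shorter description. A minimal sketch, with zlib as our (arbitrary) choice of compressor:

```python
import hashlib
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """A computable upper bound on information content via compression.
    True Kolmogorov complexity is only approximable from above: no
    bound obtained this way is ever known to be optimal."""
    return len(zlib.compress(s, 9))

# A highly regular string compresses to almost nothing, while
# pseudo-random data (here: concatenated SHA-256 digests) does not.
regular = b"\x00" * 1024
irregular = b"".join(hashlib.sha256(bytes([i])).digest()
                     for i in range(32))   # 1024 pseudo-random bytes
```

The gap between the two bounds mirrors the informal notion of information density used throughout the review, while the impossibility of certifying the lower value as minimal mirrors why partial, "risky" strategies are a natural thing to consider.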
Rupert Hölzl (Neubiberg) Zbl 1214.03030; Zbl 1223.68060