# zbMATH — the first resource for mathematics

Computational complexity. A conceptual perspective. (English) Zbl 1154.68056
Cambridge: Cambridge University Press (ISBN 978-0-521-88473-0/hbk). xxiv, 606 p. (2008).
This is a quite complete and up-to-date treatment of complexity theory. The conceptual approach announced in the subtitle aims to explain the notions in depth, in plain current language. As a textbook, some themes, expositions and exercises are suitable for undergraduate courses in mathematics or computer science programmes, while the more advanced topics and expositions are appropriate for graduate courses. For lecturers using the book as the basis for a course, the author continually gives advice and warnings concerning difficulties in the material. On the other hand, for the specialist, the volume is certainly a useful and valuable reference book.
The author emphasizes the “conceptual nature” of his exposition, in contrast to a merely “technical” approach, and this point of view has been shared by several relevant researchers [S. Aaronson, A. Borodin, B. Chazelle, O. Goldreich, S. Goldwasser, R. Karp, M. Kearns, C. Papadimitriou, M. Sudan and S. Vadhan, Statement on conceptual contributions in theory (addressed to STOC’08 Program Committee) (2008), http://www.wisdom.weizmann.ac.il/~oded/on-concept.html]. As the author claims, a conceptual exposition can be appreciated by non-experts, it is stated in a “high-level” language, and the conceptual aspects have had impact outside the area of specialty itself.
The first chapter is an introduction to the subject, two chapters present the relevant notions of the classes P and NP, and subsequent chapters discuss the time hierarchies, the space hierarchies, randomness and counting, one-way functions, pseudorandom generators, probabilistic proof systems, and relaxations of computational problems. Seven appendices discuss complexity classes, lower complexity bounds for common problems, cryptography, probability, explicit constructions, omitted proofs and specific problems.
To reinforce the conceptual approach, the preface and a whole section of the introduction are devoted to describing the contents of the entire book. Each chapter then has its own summary, a set of closing notes tracing the development of the main notions, and an illustrative opening epigraph. Thus, at every moment the author is guiding the learner, the instructor and the specialist through complexity theory. This may seem tedious, but many of the lengthy discussions do clarify finer aspects of the notions involved.
An important difference between the technical and conceptual approaches is, for instance, the exposition of the existence of NP-complete problems. It is rather usual to illustrate NP-completeness by showing Cook’s theorem: Boolean satisfiability is NP-complete; and after an ordinary course in complexity most students are convinced that it is a natural fact. Instead, the author prefers to show the existence of an NP-complete problem using a diagonalization procedure, and he emphasizes this existence as a surprising fact.
On the other hand, the book poses a series of interesting, challenging and, above all, motivating exercises. Indeed, these exercise series may serve as the backbone of a “practical course” on complexity theory. I appreciate that the exercises tend to balance successfully the heavy conceptual digressions. In the following synopsis I summarize the technical results.
Initially, the common notions of search problems, decision problems and uniform and non-uniform models of computation (mainly Turing machines and Boolean circuits) are introduced. Rice’s theorem is shown (no non-trivial class of computable functions is decidable), as well as the unsolvability of the Halting Problem and Post’s Correspondence Problem. Universal Turing machines are discussed in connection with Kolmogorov complexity: for a given universal machine $$U$$ and a bit-string $$y$$, let $$c_U(y)$$ be the length of the shortest description of a Turing machine computing $$y$$. Then $$c_U$$ is not effectively computable. The notion of circuit complexity is also introduced. An algorithm computing a map with advice is an effective function with two arguments: the first is the advice proper and the second the value at which the map should be evaluated. The class P$$|f$$ consists of the problems solvable in polynomial time using advice of length bounded by $$f$$. The class P$$|$$poly is the union of P$$|f$$ as $$f$$ varies over all polynomials. It is shown that there are non-computable maps that are members of P$$|$$1.
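The power of advice can be made concrete in a few lines. The toy decider below handles even an uncomputable unary set, because the single advice bit per input length is hard-wired rather than computed (the function name and the example advice table are illustrative, not from the book):

```python
def decide_with_advice(x, advice):
    """Decide membership of a unary string x using one advice bit per
    input length: the machine ignores x's content and simply outputs
    the advice bit for len(x). Since the advice table may encode an
    uncomputable set, this places non-computable maps in P|1."""
    return advice[len(x)]

# A hypothetical advice table: bit k answers "is 1^k in the set?"
advice = [False, True, True, False, True]
print(decide_with_advice("111", advice))  # False
```

The point of the sketch is that all the computational difficulty is hidden in how the (non-uniform) advice table came to exist, not in the run-time of the algorithm.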
When dealing with P and NP, the classes PF and PC are first defined, consisting of the polynomially bounded relations solvable and checkable in polynomial time, respectively, while P and NP consist of the sets decidable and checkable in polynomial time. NP can be realized as the class of projections of relations in PC (hence any Yes-instance may have many solution points witnessing the Yes decision), and each problem in PC is Cook-reducible to a decision problem in NP. It is also shown that PC is contained in PF if and only if P = NP, as well as the existence of NP-complete problems, via diagonalization arguments. A list of NP-complete problems is dealt with, and it is shown that, under the assumption that P differs from NP, there are problems in NP which are neither in P nor NP-complete. A “promise” in a search or decision problem is a subset of the problem’s domain, and proper solutions are required only for its points. The “candid” problems are those whose promises are their own domains. It is then proved that any relation in NP has an optimal algorithm for its candid version. The class coNP consists of the complements of sets in NP. The intersection of NP and coNP, contained in NP but containing P, is analyzed; e.g., if NP is Cook-reducible to a problem in the intersection of NP and coNP, then NP = coNP. The Polynomial Hierarchy (PH) is exposed in a very readable way (it is clearly explained that NP would differ from P if PH differed from P), and some conditions are stated for the hierarchy to collapse, e.g., if NP is contained in P$$|$$poly then PH collapses to its second level.
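The relation-based view of NP can be illustrated with a toy witness checker for SAT: the PC relation pairs a CNF formula with a satisfying assignment, and checking a proposed solution point takes polynomial time (the DIMACS-style clause encoding and the function name are my own illustration, not the book's):

```python
def check_sat_witness(cnf, assignment):
    """Return True iff `assignment` satisfies every clause of `cnf`.

    `cnf` is a list of clauses; each clause is a list of signed variable
    indices (positive = the variable, negative = its negation).
    `assignment` maps variable index -> bool. Runs in time linear in
    the formula size: this is the polynomial-time checkability of PC.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(check_sat_witness(cnf, {1: True, 2: True, 3: False}))   # True
print(check_sat_witness(cnf, {1: False, 2: True, 3: False}))  # False
```

SAT, as a set, is then the projection of this relation onto its first coordinate: the formulas for which some witness makes the checker accept.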
In the fourth and fifth chapters, the time hierarchies, deterministic and non-deterministic, are developed, including Borodin’s Gap Theorem (for any computable $$g$$ there exists $$t$$ such that DTime($$t$$) = DTime($$g\circ t$$)) and the Speed-Up Theorem. The space hierarchies are also described. The classes L and NL are introduced as those consisting of problems that are, respectively, deterministically and non-deterministically solvable in logarithmic space. The undirected graph connectivity problem is shown to be in L, while directed graph connectivity is NL-complete under log-space reductions. The class L is compared with P (L would differ from P if some P-complete problem were not in L), and it is shown that NL is contained in P. The deterministic and non-deterministic space hierarchies are compared, and finally it is proved that the satisfiability problem for quantified Boolean formulae is PSpace-complete under polynomial-time reductions.
The sixth chapter deals with randomness and counting. Probabilistic machines are considered off-line (there is a source of input random seeds) and on-line (the inner transitions have associated probabilities), and the machines may have several error types: two-sided (both Yes-instances and No-instances have associated error probabilities), one-sided (Yes-instances are decided correctly while for No-instances there is an error probability), and zero-sided (for some instances there is no stated decision, while for the other instances the given decisions are always correct). BPP consists of the problems solvable probabilistically in polynomial time with two-sided error. It contains P and is contained in the second level of PH, in PSpace and in P$$|$$poly. It is unknown whether it is contained in NP or equals P. Primality testing is shown to be in BPP (although the author reminds us that it is in fact in P). The class RP consists of the problems solvable probabilistically in polynomial time with one-sided error; thus it is contained in the intersection of BPP and NP. The polynomial identity problem is shown to be solvable probabilistically with one-sided error. The class ZPP is the intersection of RP and coRP, and it can be realized as the class of problems solvable probabilistically in expected polynomial time. The classes BPL and RL are defined analogously with respect to log-space, polynomial-time algorithms; the undirected graph connectivity problem is shown to be in RL. Counting is treated extensively. #P consists of the functions counting solutions to problems in PC: $$f$$ is in #P if and only if, for some relation $$R$$ in PC and each $$x$$, $$f(x)$$ counts the number of $$y$$s such that $$(x,y)\in R$$; in this case one writes $$\#R=f$$. The problem of counting perfect matchings is shown to be in #P.
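One-sided error can be seen concretely in the classical Miller–Rabin primality test (a standard algorithm used here as an illustration, not taken from the book): a "composite" verdict is always correct, while a "probably prime" verdict errs on a composite input with probability at most $$4^{-t}$$ over $$t$$ independent rounds.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test with one-sided error: a 'composite'
    verdict is always correct, while 'probably prime' may err on a
    composite with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses compositeness: certain
    return True                   # probably prime

print(miller_rabin(561))          # Carmichael number: False
print(miller_rabin(10**9 + 7))    # a well-known prime: True
```

Amplifying the number of rounds shrinks the one-sided error exponentially, which is exactly the robustness of RP-style definitions under error reduction.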
The notion of parsimonious reduction is introduced (basically, a reduction preserving the number of solutions), and it is shown that a problem $$\#R$$ is #P-complete provided that any search problem in PC has a parsimonious reduction to $$R\in\text{PC}$$. It is also shown that there are tractable problems $$R$$ for which $$\#R$$ is #P-complete; in particular, SAT for disjunctive normal forms is trivial, yet counting the number of satisfying assignments is difficult: #DNF is #P-complete. For a function $$f$$, an $$(r,s)$$-approximator is a probabilistic algorithm $$\Pi$$ such that, at each point $$x$$, the probability that the relative error between $$\Pi(x)$$ and $$f(x)$$ is greater than $$r$$, evaluated at the length of $$x$$, is bounded above by $$s$$, evaluated at the length of $$x$$; and $$f$$ is said to be $$(1-r)$$-approximable if it has an $$(r,1/3)$$-approximator. It is then seen that #DNF is $$(1-P^{-1})$$-approximable for each polynomial $$P$$. Indeed, in a more general setting it is shown that for any $$R$$ in PC and any polynomial $$P$$ there is an oracle Turing machine that, having query access to NP, is a $$(P^{-1},\mu)$$-approximator of $$\#R$$, where $$\mu$$ is a negligible map. The problem of uniform generation (for a relation $$R$$ in PC and a point $$x$$ in its domain, choose uniformly a solution in $$R(x)$$) and its probabilistic solvability is compared with the problem of approximating its counting version, and the two are shown to be computationally equivalent. Finally, several procedures for uniform generation are explained in detail.
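The positive approximability of #DNF rests on the Karp–Luby sampling idea: sample uniformly from the disjoint union of the clauses' solution sets (whose total size is easy to compute) and estimate the fraction of samples whose clause index is the first one satisfied. The sketch below is a standard rendering of that algorithm; the dictionary clause encoding and the sample count are my own choices.

```python
import random

def karp_luby_count_dnf(clauses, n, samples=20000):
    """Karp-Luby estimator for the number of assignments (over n
    variables) satisfying a DNF. Each clause is a dict var -> bool
    giving its literals. Returns an estimate of #DNF."""
    # |S_i| = 2^(n - len(C_i)) assignments satisfy clause i alone
    sizes = [2 ** (n - len(c)) for c in clauses]
    total = sum(sizes)  # size of the disjoint-union universe
    hits = 0
    for _ in range(samples):
        # sample a clause i with probability |S_i| / total
        i = random.choices(range(len(clauses)), weights=sizes)[0]
        # sample a uniform assignment satisfying clause i
        sigma = {v: random.random() < 0.5 for v in range(1, n + 1)}
        sigma.update(clauses[i])
        # count the pair only if i is the FIRST clause sigma satisfies,
        # so each satisfying assignment is counted exactly once
        first = next(j for j, c in enumerate(clauses)
                     if all(sigma[v] == b for v, b in c.items()))
        if first == i:
            hits += 1
    return total * hits / samples

# (x1 and x2) or (not x1) over 3 variables: exact count is 2 + 4 = 6
clauses = [{1: True, 2: True}, {1: False}]
print(karp_luby_count_dnf(clauses, 3))
```

Since the hit probability is at least $$1/m$$ for $$m$$ clauses, polynomially many samples suffice for the $$(1-P^{-1})$$-approximation stated above.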
The seventh chapter treats the benefits of hard problems. The exposition is closely connected with [O. Goldreich, Modern cryptography, probabilistic proofs and pseudo-randomness. Algorithms and Combinatorics. 17. Berlin: Springer (1999; Zbl 0907.94002)] and the cryptography appendix of the current book. The main benefit of hard problems is the possibility of building one-way functions, which are the main components of cryptographic protocols. Given a relation $$R$$ in PC, a generator of solved intractable instances is a map $$G:n\mapsto (x,y)\in R$$ such that $$n$$ is the length of $$x$$ and, for any probabilistic polynomial-time algorithm $$B$$, $$n\mapsto\text{Prob}[G(n)=(x,y) \text{ \& } (x,B(x,n))\in R]$$ is negligible. A map $$f$$ is one-way if it is computable but intractable to invert, i.e. for any probabilistic polynomial-time algorithm $$B$$, the map $$n\mapsto\text{Prob}[B(f(x),n)\in f^{-1}(f(x)) \mid \text{length}(x)=n]$$ is negligible. The existence of one-way maps is equivalent to the existence of relations with generators of solved intractable instances. Some weaker notions of one-way maps are introduced and are proved to entail the initial notion. A hard-core predicate $$b$$ for a function $$f$$ is a 0-1 valued map such that, even knowing the value $$f(x)$$, the bit $$b(x)$$ cannot be guessed substantially better than with probability 1/2 (for any probabilistic polynomial-time $$A$$, $$n\mapsto|\text{Prob}[A(f(x))=b(x)\mid \text{length}(x)=n]-1/2|$$ is negligible). The generic hard-core theorem states that the inner product modulo 2 is a hard-core for the map $$(x,y)\mapsto (g(x),y)$$ provided that $$g$$ is one-way, and its proof yields a probabilistic list-decoding procedure for Hadamard codes.
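The predicate in the generic hard-core theorem is simple to state in code; the hardness, of course, lies not in evaluating it but in predicting it from $$g(x)$$ and $$y$$ alone (the function name below is illustrative):

```python
def hardcore_bit(x_bits, r_bits):
    """The inner-product-mod-2 hard-core predicate: b(x, r) = <x, r> mod 2.
    Given only g(x) and r for a one-way g, guessing this bit noticeably
    better than 1/2 would allow inverting g."""
    assert len(x_bits) == len(r_bits)
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

print(hardcore_bit([1, 0, 1, 1], [1, 1, 0, 1]))  # (1+0+0+1) mod 2 = 0
```

A single hard-core bit is exactly the raw material the next chapter's pseudorandom generators consume.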
The eighth chapter deals with the notion of pseudorandom generators. For a given stretch map $$\ell:\mathbb N\to\mathbb N$$, a pseudorandom generator $$G:(0+1)^*\to(0+1)^*$$ is such that the image of $$(0+1)^n$$ is a subset of $$(0+1)^{\ell(n)}$$ and, for a given family of distinguishing algorithms and a given family of threshold functions, the probabilities that a distinguisher accepts $$G(U_n)$$ and $$U_{\ell(n)}$$ coincide up to a threshold function (here $$U_k$$ is a uniformly distributed random variable in $$(0+1)^k$$). In the present context, the distinguishers are probabilistic polynomial-time algorithms and the threshold maps are the reciprocals of polynomials. This notion of randomness differs from those due to Kolmogorov and Chaitin. Indeed, those notions (a sequence is random if its shortest descriptions have essentially the same length as the sequence itself) are ontological, whilst the randomness notion in the current book is of a behavioral kind. Pseudorandom generators may be used to provide random seeds for probabilistic machines. The notions of statistical closeness and computational indistinguishability are discussed, both involving probability ensembles. The existence theorem is proved: if there exists a polynomial-time, one-to-one, length-preserving one-way map, then for any stretch map there is a pseudorandom generator. This entails that the existence of pseudorandom generators is equivalent to the existence of one-way maps. Several forms of derandomization of BPP are stated, among them: if there exists a non-uniformly strong pseudorandom generator, then BPP is contained in the intersection of the classes DTime($$2^{n^{\varepsilon}}$$) over all $$\varepsilon>0$$; the Nisan-Wigderson derandomization procedure is also presented. The class SC consists of the problems solvable in deterministic polynomial time and polylogarithmic space; it is proved that BPL is contained in SC. Random walks are also used as a way to build pseudorandom generators.
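How a generator stretches its seed can be sketched in the Blum–Micali style: iterate a one-way permutation on the seed and emit a hard-core bit of each intermediate state. In the runnable sketch below, the rotation used for $$f$$ and the first-bit predicate are NOT one-way or hard-core; they are placeholders that merely make the schematic executable.

```python
def prg_sketch(seed, f, b, ell):
    """Schematic Blum-Micali generator: starting from the seed,
    repeatedly apply f and output the bit b of each intermediate state.
    Pseudorandomness would follow only if f were a genuine one-way
    permutation with hard-core predicate b."""
    state, out = list(seed), []
    for _ in range(ell):
        out.append(b(state))
        state = f(state)
    return out

rotate = lambda s: s[1:] + s[:1]   # toy permutation (NOT one-way)
first_bit = lambda s: s[0]         # toy predicate (NOT hard-core)
print(prg_sketch([1, 0, 1], rotate, first_bit, 6))  # [1, 0, 1, 1, 0, 1]
```

The structure makes visible why one hard bit per iteration suffices for any polynomial stretch map $$\ell$$.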
The ninth chapter is devoted to probabilistic proof systems. An interactive proof is modeled as a game between two parties, a prover and a verifier. Each party plays its turn based on a common input, a random seed, and the previous moves of its opponent. Given a problem set $$S$$ and an instance $$x$$, the goal of the prover is to convince the verifier that $$x\in S$$. The verifier has access to probabilistic polynomial-time resources, while there are no limits on the prover. The interactive proof must be complete (if $$x\in S$$ then the prover has a winning strategy) and sound (if $$x\not\in S$$ then the verifier rejects with probability at least 1/2, independently of the prover’s strategy). For instance, the graph non-isomorphism problem admits a probabilistic proof system: given two graphs, the verifier chooses one at random, renumbers its vertices and asks the prover to point out which graph it came from; if the prover’s reply coincides with the chosen graph, the verifier is persuaded that the two initial graphs are non-isomorphic. It is shown that any set with an interactive proof system with zero soundness error is in NP. The class IP consists of the problems having interactive proof systems with polynomially many rounds. It is shown that IP = PSpace, following a series of steps including the fact that coNP lies within IP (an interactive system recognizing unsatisfiable Boolean formulas, a coNP-complete problem, is extended to solve QBF, which is PSpace-complete), and the fact that any interactive proof system admits a polynomial-space optimal strategy for the prover (one maximizing the acceptance probability of the verifier).
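One round of the graph non-isomorphism protocol is easy to simulate, with a brute-force isomorphism search standing in for the computationally unbounded prover (the adjacency-matrix encoding and all function names are my own illustration):

```python
import random
from itertools import permutations

def relabel(adj, perm):
    """Apply the vertex permutation `perm` to the adjacency matrix `adj`."""
    n = len(adj)
    new = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            new[perm[u]][perm[v]] = adj[u][v]
    return new

def isomorphic(a, b):
    """Brute-force isomorphism test, feasible only for tiny graphs;
    it plays the role of the prover's unbounded power."""
    return any(relabel(a, list(p)) == b for p in permutations(range(len(a))))

def gni_round(g0, g1):
    """One round: the verifier secretly relabels a randomly chosen graph;
    the prover must identify which one it was. If g0 and g1 are
    non-isomorphic the prover always succeeds; if they are isomorphic,
    he can do no better than guessing."""
    choice = random.randrange(2)
    perm = list(range(len(g0)))
    random.shuffle(perm)
    challenge = relabel([g0, g1][choice], perm)
    answer = 0 if isomorphic(g0, challenge) else 1   # prover's reply
    return answer == choice

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(all(gni_round(triangle, path) for _ in range(20)))  # True
```

Repeating the round amplifies soundness: a cheating prover facing isomorphic graphs survives $$t$$ rounds with probability only $$2^{-t}$$.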
The Arthur-Merlin (AM) games are probabilistic proof systems in which a public coin-tossing system provides the random seeds for both players. IP($$f$$) is the subclass of IP in which the number of rounds is bounded by $$f$$. Speed-up and simulation results are proven: $$\text{IP}(O(f)) = \text{IP}(f)$$, IP$$(f)$$ is contained in AM$$(f+3)$$, and AM$$(2f)$$ is contained in AM$$(f+1)$$. The author remarks that if coNP were contained in IP$$(2)$$ then the polynomial hierarchy would collapse. Zero knowledge is also treated in this chapter. A prover’s strategy is zero-knowledge if the whole interaction with any verifier’s strategy can be simulated by a probabilistic polynomial-time algorithm (in this sense, the prover provides no further knowledge). For instance, the Graph Isomorphism Problem admits a zero-knowledge probabilistic proof system: given two isomorphic graphs, the prover renumbers the vertices of the second graph and sends the resulting graph $$H$$ to the verifier. The verifier chooses an index $$i$$ and asks the prover to give an isomorphism $$G_i\to H$$; since the graphs are indeed isomorphic, the prover can build this isomorphism (if $$i=2$$, it is the initial renumbering; otherwise it is the composition of the renumbering with the isomorphism between the two graphs), so he is able to send a map $$\psi$$, and the verifier accepts according to his check that $$\psi$$ is indeed an isomorphism. In this protocol, the verifier learns nothing beyond the claimed isomorphism; for instance, he cannot even tell that the graph renumbered by the prover was the second one.
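A round of this zero-knowledge protocol can also be simulated in a few lines (the adjacency-matrix encoding and helper names are my own; the cryptographic commitment that would hide $$H$$'s origin until the challenge arrives is elided):

```python
import random

def relabel(adj, perm):
    """Apply the vertex permutation `perm` to the adjacency matrix `adj`."""
    n = len(adj)
    new = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            new[perm[u]][perm[v]] = adj[u][v]
    return new

def zk_gi_round(g1, g2, pi):
    """One round of the zero-knowledge proof for graph isomorphism.
    `pi` is the prover's secret isomorphism with relabel(g1, pi) == g2.
    The prover sends a fresh renumbering H of g2; the verifier asks for
    an isomorphism from a randomly chosen input graph onto H."""
    n = len(g1)
    rho = list(range(n))
    random.shuffle(rho)
    h = relabel(g2, rho)                      # prover's commitment
    i = random.randrange(2)                   # verifier's challenge
    # psi = rho∘pi maps g1 onto H; plain rho maps g2 onto H
    psi = [rho[pi[u]] for u in range(n)] if i == 0 else rho
    return relabel([g1, g2][i], psi) == h     # verifier's check

g1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]        # single edge 0-1
pi = [1, 2, 0]                                # secret isomorphism
g2 = relabel(g1, pi)                          # single edge 1-2
print(all(zk_gi_round(g1, g2, pi) for _ in range(20)))  # True
```

Whatever the challenge, the verifier sees only a uniformly random renumbering of a graph he already holds, which is why the interaction is simulatable without the secret.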
The Three-Coloring Graph Problem is also solvable with a zero-knowledge probabilistic proof system. Indeed, if there is a non-uniformly hard one-way map, then any problem in NP admits a zero-knowledge probabilistic proof system (hence the importance of these systems for cryptographic protocols). A probabilistically checkable proof (PCP) system for a set $$S$$ is an oracle probabilistic polynomial-time machine $$V$$, the verifier, such that if $$x\in S$$ then there is an oracle $$\pi_x$$ such that $$V$$ accepts $$x$$ when querying $$\pi_x$$, and if $$x\not\in S$$ then $$V$$ rejects $$x$$ with probability at least 1/2, independently of the queried oracle. The complexity of a PCP system is measured by the number of oracle queries $$q(n)$$ and the number of random transitions $$r(n)$$ (where $$n$$ is the length of the input $$x$$), and any two such measures determine a corresponding class PCP($$r,q$$). It is shown that for any polynomial map $$r$$, PCP($$r$$, poly) is contained in NTime($$2^r$$poly), and consequently PCP(log, poly) is contained in NP; this gives one direction of the important PCP Theorem: NP = PCP(log, $$O(1)$$). The PCP Theorem has immediate consequences for approximability. Finally, some estimates are made of the length of witnessing solutions for problems in NP; for instance, for any problem $$S$$ in NP there exists a logarithmic map $$\ell$$ such that, for any map $$k$$ with $$0\leq k\leq \ell$$, the set $$S$$ is in PCP$$(\ell-k, O(2^k))$$.
The tenth chapter treats relaxation and approximation procedures. For any binary relation $$R$$ and any real-valued map $$f$$, let $$T_f:x\mapsto\max\{f(x,y)\mid (x,y)\in R\}$$ and $$t_f:x\mapsto\min\{f(x,y)\mid (x,y)\in R\}$$. Given a “factor” map $$g$$, the corresponding relaxations for maximization, $$\{(x,y)\mid f(x,y)\geq T_{f}(x)/g(x)\}$$, and minimization, $$\{(x,y)\mid f(x,y)\leq t_{f}(x)\cdot g(x)\}$$, are considered. For instance, Minimum Vertex Cover with factor 2 is an easy problem (the required approximations correspond to maximal matchings, which can be obtained by a greedy algorithm), and some approximations of the Traveling Salesman Problem are likewise easy. As a second form of approximation, “gap problems” are introduced: given two maps $$g_1\leq g_2$$, one must distinguish the instances with $$T_f(x)<g_1(x)$$ from those with $$T_f(x)\geq g_2(x)$$. Some of these problems, related to Clique, SAT, and systems of linear equations, are proved to be NP-hard. As another relaxation, property testing is considered: $$\Gamma_u(S)$$ consists of those points whose Hamming distance to any point of $$S$$ (of the same length) is at least $$u$$ times their own length, and one is required to distinguish, for a fixed $$u$$, the sets $$S$$ and $$\Gamma_u(S)$$. It is shown that there are probabilistic polynomial-time algorithms making this distinction for some problems related to graph properties invariant under isomorphism. Average-case complexity is treated using probability ensembles. This allows the author to define distributional problems; namely, the class distNP consists of the pairs $$(S,X)$$ where $$S$$ is a decision problem and $$X$$ is a simple probability ensemble. The existence of distNP-complete problems is shown.
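The factor-2 vertex cover claim can be checked in a few lines: a greedily built maximal matching has endpoints forming a cover, and any cover must contain at least one endpoint of each matched edge, so the optimum is at least half the output's size (the edge-list encoding is my own illustration):

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of each edge of a greedily built maximal
    matching; the result is a vertex cover of size at most twice the
    optimum, since any cover needs one endpoint per matched edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge joins the matching
            cover.update((u, v))
    return cover

# Star graph: the optimum cover is {0}; the algorithm returns 2 vertices.
edges = [(0, 1), (0, 2), (0, 3)]
cover = vertex_cover_2approx(edges)
print(len(cover), all(u in cover or v in cover for u, v in edges))  # 2 True
```

The star example shows the factor 2 is tight for this algorithm, matching the relaxation-with-factor-2 formulation above.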
The class tpcP consists of the distributional problems $$(S,X)$$ that are typically solvable in polynomial time (there is an algorithm such that the probability of either erring or exceeding a polynomial time bound is negligible). A probabilistic analogue tpcBPP is also introduced, and it is shown that distNP is contained in tpcBPP if and only if distNP is contained in tpcP.
The appendices succinctly develop the mathematical themes that are required in the text but lie outside the scope of complexity theory proper.
The book is quite extensive, but any reader may certainly select material, following the suggestions formulated by the author, in order to profit best from it, either as a textbook or as a reference handbook.

##### MSC:
- 68Q15 Complexity classes (hierarchies, relations among complexity classes, etc.)
- 68Q17 Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.)
- 68Q25 Analysis of algorithms and problem complexity
- 68W40 Analysis of algorithms
- 68-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to computer science
- 68-02 Research exposition (monographs, survey articles) pertaining to computer science
- 94A60 Cryptography
- 94A62 Authentication, digital signatures and secret sharing