Recent zbMATH articles in MSC 68Q25https://www.zbmath.org/atom/cc/68Q252021-04-16T16:22:00+00:00WerkzeugRényi entropies as a measure of the complexity of counting problems.https://www.zbmath.org/1456.827052021-04-16T16:22:00+00:00"Chamon, Claudio"https://www.zbmath.org/authors/?q=ai:chamon.claudio"Mucciolo, Eduardo R."https://www.zbmath.org/authors/?q=ai:mucciolo.eduardo-rHow fast can we reach a target vertex in stochastic temporal graphs?https://www.zbmath.org/1456.681242021-04-16T16:22:00+00:00"Akrida, Eleni C."https://www.zbmath.org/authors/?q=ai:akrida.eleni-c"Mertzios, George B."https://www.zbmath.org/authors/?q=ai:mertzios.george-b"Nikoletseas, Sotiris"https://www.zbmath.org/authors/?q=ai:nikoletseas.sotiris-e"Raptopoulos, Christoforos"https://www.zbmath.org/authors/?q=ai:raptopoulos.christoforos-l"Spirakis, Paul G."https://www.zbmath.org/authors/?q=ai:spirakis.paul-g"Zamaraev, Viktor"https://www.zbmath.org/authors/?q=ai:zamaraev.victor-aA temporal graph (or evolving graph) is a sequence of spanning subgraphs \((G_t : t \in {\mathbb N})\) of an underlying graph \(G\).
Each subgraph in the sequence is called a \textit{snapshot}.
A \textit{stochastic temporal graph} is a stochastic process on snapshots, forming a temporal graph on \(G\).
This paper considers processes in which the probability that each edge \(e\in E\) appears at time step \(t\) depends on its status in the previous \(k\) steps.
Edges behave independently of one another over time, each with its own probability distribution.
A detailed examination of the complexity of two temporal path problems, Minimum Arrival and Best Policy, is presented for this stochastic temporal graph model.
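The model can be illustrated with a small Monte Carlo sketch (not from the paper): a minimal memoryless instance, i.e. \(k = 0\), in which each edge appears at every step independently with a fixed, hypothetical probability, and a temporal path may wait at a vertex and traverse at most one edge per step.

```python
import random

def sample_min_arrival(edge_prob, s, y, max_steps=1000, seed=0):
    """One sample of the minimum arrival time from s to y in a
    memoryless (k = 0) stochastic temporal graph: each undirected
    edge appears in the snapshot at step t independently with its
    own probability; a temporal path may wait at a vertex and
    traverse at most one edge per time step."""
    rng = random.Random(seed)
    reached = {s}
    for t in range(1, max_steps + 1):
        snapshot = {e for e, p in edge_prob.items() if rng.random() < p}
        snapshot |= {(v, u) for (u, v) in snapshot}  # undirected edges
        # One hop per step: new vertices are computed from the old set.
        reached |= {v for (u, v) in snapshot if u in reached}
        if y in reached:
            return t
    return None  # target not reached within max_steps

# Path 0-1-2, each edge present with probability 1/2 at every step;
# the expected arrival time at vertex 2 is 2 + 2 = 4 (two geometric waits).
probs = {(0, 1): 0.5, (1, 2): 0.5}
times = [sample_min_arrival(probs, 0, 2, seed=i) for i in range(2000)]
print(sum(times) / len(times))  # Monte Carlo estimate, close to 4
```

Averaging many such samples estimates the expected minimum arrival time; the Best Policy problem studied in the paper instead asks for an optimal online traversal strategy.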
Reviewer: Charles J. Colbourn (Tempe)The power of linear programming for general-valued CSPs.https://www.zbmath.org/1456.680592021-04-16T16:22:00+00:00"Kolmogorov, Vladimir"https://www.zbmath.org/authors/?q=ai:kolmogorov.vladimir"Thapper, Johan"https://www.zbmath.org/authors/?q=ai:thapper.johan"Živný, Stanislav"https://www.zbmath.org/authors/?q=ai:zivny.stanislavThe slowdown theorem: a lower bound for computational irreducibility in physical systems.https://www.zbmath.org/1456.680572021-04-16T16:22:00+00:00"Gorard, Jonathan"https://www.zbmath.org/authors/?q=ai:gorard.jonathanSummary: Following from the work of Beggs and Tucker on the computational complexity of physical oracles, a simple diagonalization argument is presented to show that generic physical systems, consisting of a Turing machine and a deterministic physical oracle, permit computational irreducibility. To illustrate this general result, a specific analysis is provided for one such system (namely, a scatter machine experiment (SME) in which a classical particle is scattered by a sharp wedge), and it is proved that this system must be computationally irreducible. Finally, some philosophical implications of these results are discussed; in particular, it is shown that the slowdown theorem implies the existence of classical physics experiments with undecidable observables, as well as the existence of a definite lower bound for the computational irreducibility of the laws of physics.
Therefore, it is argued that the hypothesis that ``the universe is a computer simulation'' has no predictive (i.e., only retrodictive) power.Geometrical organization of solutions to random linear Boolean equations.https://www.zbmath.org/1456.941462021-04-16T16:22:00+00:00"Mora, Thierry"https://www.zbmath.org/authors/?q=ai:mora.thierry"Mézard, Marc"https://www.zbmath.org/authors/?q=ai:mezard.marcA rigorous extension of the Schönhage-Strassen integer multiplication algorithm using complex interval arithmetic.https://www.zbmath.org/1456.682322021-04-16T16:22:00+00:00"Steinke, Thomas"https://www.zbmath.org/authors/?q=ai:steinke.thomas"Sainudiin, Raazesh"https://www.zbmath.org/authors/?q=ai:sainudiin.raazeshSummary: Multiplication of \(n\)-digit integers by long multiplication requires \(O(n^2)\) operations and can be time-consuming. In [Computing 7, 281--292 (1971; Zbl 0223.68007)] \textit{A. Schönhage} and \textit{V. Strassen} published an algorithm capable of performing the task with only \(O(n\log(n))\) arithmetic operations over the complex field \(\mathbb{C}\); naturally, finite-precision approximations to \(\mathbb{C}\) are used and rounding errors need to be accounted for. Overall, using variable-precision fixed-point numbers, this results in an \(O(n(\log(n))^{(2+\epsilon)})\)-time algorithm. However, to make this algorithm more efficient and practical we need to make use of hardware-based floating-point numbers. How do we deal with rounding errors, and how do we determine the limits of the fixed-precision hardware? Our solution is to use interval arithmetic to guarantee the correctness of results and determine the hardware's limits. We examine the feasibility of this approach and are able to report that 75,000-digit base-256 integers can be handled using double-precision containment sets.
This demonstrates the practical potential of our approach; at this stage, however, our implementation does not yet compete with commercial ones, though it does establish the feasibility of the technique.
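The role of containment sets can be illustrated with a toy real-interval type (a one-dimensional simplification, not the authors' implementation; the `Interval` class, its outward rounding via `math.nextafter` (Python 3.9+), and the integer-trapping check are all illustrative).

```python
import math

class Interval:
    """Closed real interval [lo, hi] with outward-rounded arithmetic:
    every operation widens its bounds to the adjacent floats, so the
    exact result of combining the endpoints is always contained — a
    1-D analogue of the complex containment sets used in the verified
    FFT-based multiplication."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(ps), -math.inf),
                        math.nextafter(max(ps), math.inf))

    def contains_unique_int(self):
        """The final convolution coefficients are known to be integers;
        the computation is certified when each interval traps exactly
        one integer, which is then the exact answer."""
        k = math.ceil(self.lo)
        return k if k <= self.hi and k + 1 > self.hi else None

x = Interval(0.1) + Interval(0.2)   # guaranteed to contain the exact sum
print(x.lo <= 0.3 <= x.hi)          # True
y = Interval(6.999999999999999, 7.000000000000001)
print(y.contains_unique_int())      # 7
```

If an interval grows wide enough to trap more than one integer, `contains_unique_int` returns `None` — exactly the signal that the hardware's precision limit has been exceeded.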
For the entire collection see [Zbl 1391.03010].A max-sum algorithm for training discrete neural networks.https://www.zbmath.org/1456.681392021-04-16T16:22:00+00:00"Baldassi, Carlo"https://www.zbmath.org/authors/?q=ai:baldassi.carlo"Braunstein, Alfredo"https://www.zbmath.org/authors/?q=ai:braunstein.alfredoNP-logic systems and model-equivalence reductions.https://www.zbmath.org/1456.680602021-04-16T16:22:00+00:00"Shen, Yuping"https://www.zbmath.org/authors/?q=ai:shen.yuping"Zhao, Xishun"https://www.zbmath.org/authors/?q=ai:zhao.xishunSummary: In this paper we investigate the existence of model-equivalence reductions between NP-logic systems, which are logic systems whose model existence problem is in NP. It is shown that among all NP-systems with model checking problem in NP, the existentially quantified propositional logic \((\exists\text{PF})\) is maximal with respect to poly-time model-equivalent reduction. However, \(\exists\text{PF}\) does not seem to be a maximal NP-system in general, because there exists an NP-system whose model checking problem is \(\mathrm{D}^\mathrm{P}\)-complete.
For the entire collection see [Zbl 1391.03010].Making big steps in trajectories.https://www.zbmath.org/1456.650492021-04-16T16:22:00+00:00"Müller, Norbert Th."https://www.zbmath.org/authors/?q=ai:muller.norbert-th"Korovina, Margarita"https://www.zbmath.org/authors/?q=ai:korovina.margarita-vladimirovnaSummary: We consider the solution of initial value problems within the context of hybrid systems and emphasise the use of high-precision approximations (in software for exact real arithmetic). We propose a novel algorithm for the computation of trajectories up to the area where discontinuous jumps appear, applicable to holomorphic flow functions. Examples with a prototypical implementation illustrate that the algorithm might provide results with higher precision than well-known ODE solvers at a similar computation time.
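The idea of a large Taylor-series step for a holomorphic flow can be sketched in a few lines (an illustration, not the paper's algorithm and without its rigorous error control; the model equation \(y' = y^2\) and its coefficient recurrence are our choice).

```python
from fractions import Fraction

def taylor_step(y0, order, h):
    """One large Taylor step for the model ODE y' = y^2, whose solution
    is holomorphic around t = 0. The Taylor coefficients c_k of the
    solution satisfy c_{k+1} = (sum_{i<=k} c_i c_{k-i}) / (k + 1)
    (Cauchy product of the series with itself); a single step of size h
    is valid while h stays inside the radius of convergence."""
    c = [Fraction(y0)]
    for k in range(order):
        conv = sum(c[i] * c[k - i] for i in range(k + 1))
        c.append(conv / (k + 1))
    return sum(ck * Fraction(h) ** k for k, ck in enumerate(c))

# Exact solution for y(0) = 1/2 is y(t) = 1/(2 - t), so y(1/2) = 2/3.
approx = taylor_step(Fraction(1, 2), order=30, h=Fraction(1, 2))
print(float(approx))  # ~0.6666666666666666
```

Exact rational arithmetic keeps the truncation error as the only error source; the paper's setting replaces this with software exact real arithmetic and arbitrary holomorphic right-hand sides.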
For the entire collection see [Zbl 1391.03010].Computing the solutions of the combined Korteweg-de Vries equation by Turing machines.https://www.zbmath.org/1456.351812021-04-16T16:22:00+00:00"Lu, Dianchen"https://www.zbmath.org/authors/?q=ai:lu.dianchen"Wang, Qingyan"https://www.zbmath.org/authors/?q=ai:wang.qingyan"Zheng, Rui"https://www.zbmath.org/authors/?q=ai:zheng.ruiSummary: In this paper, we study the computability of the initial value problem of the Combined Korteweg-de Vries (CKdV, for short) equation: \(u_t+uu_x+u^2u_x+u_{xxx}=0,u(x,0)=\phi(x)\). It is shown that, for any integer \(s\geqslant 3\), the nonlinear solution operator \(K_R : H^s(\mathbb{R})\rightarrow C(\mathbb{R},H^s(\mathbb{R}))\), which maps the initial data \(\phi\) to the solution of the Combined KdV equation, can be computed by a Turing machine.
For the entire collection see [Zbl 1391.03010].Scheduling jobs with a V-shaped time-dependent processing time.https://www.zbmath.org/1456.900782021-04-16T16:22:00+00:00"Sedding, Helmut A."https://www.zbmath.org/authors/?q=ai:sedding.helmut-aSummary: In the field of time-dependent scheduling, a job's processing time is specified by a function of its start time. While monotonic processing time functions are well-known in the literature, this paper introduces non-monotonic functions with a convex, piecewise-linear V-shape similar to the absolute value function. They attain their minimum at an ideal start time, which is the same for all given jobs. There, the processing time equals the job's basic processing time. Earlier or later, it increases linearly with slopes that can be asymmetric and job-specific. The objective is to sequence the given jobs on a single machine and minimize the makespan. This is motivated by production planning of moving car assembly lines, in particular, to sequence a worker's assembly operations such that the time-dependent walking times to gather materials from the line side are minimized. This paper characterizes the problem's computational complexity from several angles. NP-hardness is observed even if the two slopes are the same for all jobs. A fully polynomial time approximation scheme is devised for the more generic case of agreeable ratios of basic processing time and slopes. In the most generic case with job-specific slopes, several polynomial cases are identified.Probabilistic logic over equations and domain restrictions.https://www.zbmath.org/1456.030452021-04-16T16:22:00+00:00"Mordido, Andreia"https://www.zbmath.org/authors/?q=ai:mordido.andreia"Caleiro, Carlos"https://www.zbmath.org/authors/?q=ai:caleiro.carlosSummary: We propose and study a probabilistic logic over an algebraic basis, including equations and domain restrictions.
The logic combines aspects from classical logic and equational logic with an exogenous approach to quantitative probabilistic reasoning. We present a sound and weakly complete axiomatization for the logic, parameterized by an equational specification of the algebraic basis coupled with the intended domain restrictions. We show that the satisfiability problem for the logic is decidable, under the assumption that its algebraic basis is given by means of a convergent rewriting system, and, additionally, that the axiomatization of domain restrictions enjoys a suitable subterm property. For this purpose, we provide a polynomial reduction to Satisfiability Modulo Theories. As a consequence, we get that validity in the logic is also decidable. Furthermore, under the assumption that the rewriting system that defines the equational basis underlying the logic is also subterm convergent, we show that the resulting satisfiability problem is \textsf{NP}-complete, and thus the validity problem is \textsf{coNP}-complete. We test the logic with meaningful examples in information security, namely by verifying and estimating the probability of the existence of offline guessing attacks on cryptographic protocols.Reconfiguration over tree decompositions.https://www.zbmath.org/1456.681322021-04-16T16:22:00+00:00"Mouawad, Amer E."https://www.zbmath.org/authors/?q=ai:mouawad.amer-e"Nishimura, Naomi"https://www.zbmath.org/authors/?q=ai:nishimura.naomi"Raman, Venkatesh"https://www.zbmath.org/authors/?q=ai:raman.venkatesh"Wrochna, Marcin"https://www.zbmath.org/authors/?q=ai:wrochna.marcinSummary: A vertex-subset graph problem \(Q\) defines which subsets of the vertices of an input graph are feasible solutions.
The reconfiguration version of a vertex-subset problem \(Q\) asks whether it is possible to transform one feasible solution for \(Q\) into another in at most \(\ell\) steps, where each step is a vertex addition or deletion, and each intermediate set is also a feasible solution for \(Q\) of size bounded by \(k\). Motivated by recent results establishing W[1]-hardness of the reconfiguration versions of most vertex-subset problems parameterized by \(\ell\), we investigate the complexity of such problems restricted to graphs of bounded treewidth. We show that the reconfiguration versions of most vertex-subset problems remain PSPACE-complete on graphs of treewidth at most \(t\) but are fixed-parameter tractable parameterized by \(\ell + t\) for all vertex-subset problems definable in monadic second-order logic (MSOL). To prove the latter result, we
introduce a technique which allows us to circumvent cardinality constraints and define reconfiguration problems in MSOL.
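As a toy illustration of the reconfiguration setting (a brute-force sketch, not the paper's treewidth-based algorithm; the 4-cycle instance and the choice of size bound are ours), the following breadth-first search explores the reconfiguration graph of independent sets under single-vertex additions and deletions.

```python
from collections import deque

def reconfigurable(vertices, feasible, source, target, max_steps):
    """BFS over the reconfiguration graph: states are feasible vertex
    subsets, one step adds or deletes a single vertex, and we ask
    whether target is reachable from source in at most max_steps steps
    (the parameter l in the text). Exponential in general; the paper's
    point is that MSOL-definable problems admit algorithms that are
    FPT in l + treewidth."""
    source, target = frozenset(source), frozenset(target)
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        s, d = queue.popleft()
        if s == target:
            return True
        if d == max_steps:
            continue
        for v in vertices:
            nxt = s - {v} if v in s else s | {v}
            if nxt not in seen and feasible(nxt):
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return False

# Independent sets of the 4-cycle 0-1-2-3-0.
edges = {(0, 1), (1, 2), (2, 3), (0, 3)}
def independent(s):
    return all((u, v) not in edges and (v, u) not in edges
               for u in s for v in s if u < v)

# Keeping every intermediate set nonempty blocks the transformation,
# while allowing the empty set makes it possible in 4 steps.
print(reconfigurable(range(4), lambda s: independent(s) and len(s) >= 1,
                     {0, 2}, {1, 3}, max_steps=4))  # False
print(reconfigurable(range(4), independent, {0, 2}, {1, 3}, max_steps=4))  # True
```

The contrast between the two calls shows how the cardinality constraint changes reachability — the very constraint the MSOL technique above is designed to circumvent.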
For the entire collection see [Zbl 1318.68014].A survey of membrane computing systems attacking hard computational problems.https://www.zbmath.org/1456.680462021-04-16T16:22:00+00:00"Song, Bosheng"https://www.zbmath.org/authors/?q=ai:song.bosheng"Lai, Leshan"https://www.zbmath.org/authors/?q=ai:lai.leshan"Li, Kenli"https://www.zbmath.org/authors/?q=ai:li.kenliSummary: In the area of membrane computing, great attention is traditionally paid to demonstrating the theoretical possibility of solving hard computational problems in polynomial time by means of various P system models. However, an important fact is that some P system models with this capability are actually stronger: they are able to solve PSPACE-complete problems in polynomial time, while others have been recently shown to characterize the class \({\mathrm{P}}^{\#{\mathrm{P}}}\), which is conjectured to be strictly included in PSPACE. In this survey, we focus on some P system models which have the potential to solve hard computational problems, including P systems with active membranes, tissue P systems, P systems with proteins on membranes, P systems with symport/antiport and membrane division, as well as spiking neural P systems. We provide a survey of computational complexity results for these P system models, point out some of their features, and describe their computational strength. Finally, some open problems related to the above-mentioned P systems are presented.Ham-sandwich cuts and center transversals in subspaces.https://www.zbmath.org/1456.520102021-04-16T16:22:00+00:00"Schnider, Patrick"https://www.zbmath.org/authors/?q=ai:schnider.patrickSummary: The Ham-Sandwich theorem is a well-known result in geometry. It states that any \(d\) mass distributions in \(\mathbb{R}^d\) can be simultaneously bisected by a hyperplane. The result is tight, that is, there are examples of \(d + 1\) mass distributions that cannot be simultaneously bisected by a single hyperplane.
In this paper we study the following question: given a continuous assignment of mass distributions to certain subsets of \(\mathbb{R}^d\), is there a subset on which we can bisect more masses than what is guaranteed by the Ham-Sandwich theorem? We investigate two types of subsets. The first type consists of the linear subspaces of \(\mathbb{R}^d\), i.e., \(k\)-dimensional flats containing the origin. We show that for any continuous assignment of \(d\) mass distributions to the \(k\)-dimensional linear subspaces of \(\mathbb{R}^d\), there is always a subspace on which we can simultaneously bisect the images of all \(d\) assignments. We extend this result to center transversals, a generalization of Ham-Sandwich cuts. As for Ham-Sandwich cuts, we further show that for \(d - k + 2\) masses, we can choose \(k-1\) of the vectors defining the \(k\)-dimensional subspace in which the solution lies. The second type consists of subsets determined by families of \(n\) hyperplanes in \(\mathbb{R}^d\). Also in this case, we find a Ham-Sandwich-type result. In an attempt to solve a conjecture by Langerman about bisections with several cuts, we show that our underlying topological result can be used to prove this conjecture in a relaxed setting.
[\textit{Z. Dvir} et al., Comput. Complexity 18, No. 1, 1--58 (2009; Zbl 1210.68136)].
Previously, the best complexity bound known was \(\mathsf{NP}^{\#\mathsf{P}}\)
[\textit{J. Mittmann} et al., Trans. Am. Math. Soc. 366, No. 7, 3425--3450 (2014; Zbl 1350.13015)].
In this article we put the problem in \(\mathsf{AM}\cap \mathsf{coAM} \). In particular, dependence testing is unlikely to be \(\mathsf{NP} \)-hard. Our proof uses methods of algebraic geometry. We estimate the size of the image and the sizes of the preimages of the polynomial map \(\mathbf{f}\) over the finite field. A gap between the corresponding sizes for independent and for dependent sets of polynomials is utilized in the \(\mathsf{AM}\) protocols.
Next, we study the open question of testing whether every annihilator of \(\mathbf{f}\) has zero constant term
[\textit{N. Kayal}, ``The complexity of the annihilating polynomial'', in: Proceedings of the 24th annual IEEE conference on computational complexity, CCC'09. Los Alamitos, CA: IEEE Computer Society. 184--193 (2009; \url{doi:10.1109/CCC.2009.37})].
We introduce a new problem called approximate polynomial satisfiability (APS), which is equivalent to the preceding question by a classical characterization in terms of the Zariski closure of the image of \(\mathbf{f} \). We show that APS is \(\mathsf{NP} \)-hard and, using ideas from algebraic geometry, we put APS in \(\mathsf{PSPACE} \). (The best previous bound was \(\mathsf{EXPSPACE}\) via Gröbner basis computation.) As an unexpected application of this to approximative complexity theory we obtain that, over any field, hitting sets for \(\overline{\mathsf{VP}}\) can be constructed in \(\mathsf{PSPACE} \). This solves an open problem posed
in [\textit{K. D. Mulmuley}, J. Am. Math. Soc. 30, No. 1, 225--309 (2017; Zbl 1402.14078)],
greatly mitigating the GCT Chasm (exponentially in terms of space complexity).
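Over fields of characteristic zero, in contrast to the finite-field setting above, algebraic independence is decided by the classical Jacobian criterion. The stdlib-only sketch below is illustrative (the sparse-dict polynomial encoding and the random-evaluation rank estimate are ours, not the paper's): it tests the criterion by evaluating the Jacobian at random rational points, which yields the generic rank with high probability by the Schwartz-Zippel lemma.

```python
from fractions import Fraction
import random

# A polynomial in n variables is a dict {exponent_tuple: coefficient}.
def diff(poly, i):
    """Formal partial derivative with respect to variable i."""
    out = {}
    for exp, c in poly.items():
        if exp[i] > 0:
            e = list(exp); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exp[i]
    return out

def evaluate(poly, point):
    total = Fraction(0)
    for exp, c in poly.items():
        term = Fraction(c)
        for e, v in zip(exp, point):
            term *= v ** e
        total += term
    return total

def rank(matrix):
    """Exact Gaussian elimination over the rationals."""
    m = [row[:] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def algebraically_independent(polys, nvars, trials=5):
    """Jacobian criterion (valid over characteristic 0): the polynomials
    are algebraically independent iff their Jacobian has full rank. The
    generic rank is estimated at random rational points; this is exactly
    the criterion that breaks down over small finite fields, which is
    why the finite-field complexity is harder to pin down."""
    rng = random.Random(1)
    best = 0
    for _ in range(trials):
        pt = [Fraction(rng.randint(-100, 100)) for _ in range(nvars)]
        J = [[evaluate(diff(p, i), pt) for i in range(nvars)] for p in polys]
        best = max(best, rank(J))
    return best == len(polys)

# f1 = x + y and f2 = x*y are independent; f3 = (x + y)^2 depends on f1.
f1 = {(1, 0): 1, (0, 1): 1}
f2 = {(1, 1): 1}
f3 = {(2, 0): 1, (1, 1): 2, (0, 2): 1}
print(algebraically_independent([f1, f2], 2))   # True
print(algebraically_independent([f1, f3], 2))   # False
```

The randomized evaluation mirrors the size-counting idea of the \(\mathsf{AM}\) protocols: independent systems have a much larger image than dependent ones, and a random point detects the difference.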