Recent zbMATH articles in MSC 68 (https://www.zbmath.org/atom/cc/68, 2021-02-27)

Parallel computation of alpha complexes for biomolecules. (Zbl 1453.92105)
Masood, Talha Bin; Ray, Tathagata; Natarajan, Vijay
Summary: The alpha complex, a subset of the Delaunay triangulation, has been extensively used as the underlying representation for biomolecular structures. We propose a GPU-based parallel algorithm for the computation of the alpha complex, which exploits the knowledge of typical spatial distribution and sizes of atoms in a biomolecule. Unlike existing methods, this algorithm does not require prior construction of the Delaunay triangulation. The algorithm computes the alpha complex in two stages. The first stage proceeds in a bottom-up fashion and computes a superset of the edges, triangles, and tetrahedra belonging to the alpha complex. The false positives from this estimation stage are removed in a subsequent pruning stage to obtain the correct alpha complex. Computational experiments on several biomolecules demonstrate the superior performance of the algorithm, up to a factor of 50 when compared to existing methods that are optimized for biomolecules.

Algorithms for Radon partitions with tolerance. (Zbl 1453.68200)
Bereg, Sergey; Haghpanah, Mohammadreza
Summary: Let \(P\) be a set of \(n\) points in a \(d\)-dimensional space. Tverberg's theorem says that, if \(n\) is at least \((k-1)(d+1)+1\), then \(P\) can be partitioned into \(k\) sets whose convex hulls intersect. Partitions with this property are called Tverberg partitions.
A partition has tolerance \(t\) if it remains a Tverberg partition after the removal of any set of \(t\) points from \(P\). A tolerant Tverberg partition exists in any dimension provided that \(n\) is sufficiently large. Let \(N(d, k, t)\) be the smallest value of \(n\) such that tolerant Tverberg partitions into \(k\) sets exist for any set of \(n\) points in \(\mathbb{R}^d\). Only a few exact values of \(N(d, k, t)\) are known.
In this paper, we study the problem of finding Radon partitions (Tverberg partitions for \(k=2\)) for a given set of points. We develop several algorithms and find new lower bounds for \(N(d, 2, t)\).
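To make the objects concrete, the following minimal sketch (not the authors' algorithm; the function name and structure are ours) computes a Radon partition for the base case \(k=2\), \(t=0\), \(d=2\): four points in general position in the plane always split into two parts with intersecting convex hulls, via the affine dependence underlying Radon's theorem.

```python
from fractions import Fraction

def radon_partition(points):
    """Radon partition of 4 points in general position in R^2.

    Finds the affine dependence sum(l_i * p_i) = 0, sum(l_i) = 0 by
    expressing p4 in barycentric coordinates w.r.t. the triangle
    p1 p2 p3, then splits the indices by the sign of the coefficients.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = [
        (Fraction(x), Fraction(y)) for x, y in points
    ]
    # 3x3 determinant, used for Cramer's rule on the barycentric system.
    det = lambda a, b, c, d, e, f, g, h, i: (
        a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    )
    D = det(x1, x2, x3, y1, y2, y3, 1, 1, 1)
    l1 = det(x4, x2, x3, y4, y2, y3, 1, 1, 1) / D
    l2 = det(x1, x4, x3, y1, y4, y3, 1, 1, 1) / D
    l3 = det(x1, x2, x4, y1, y2, y4, 1, 1, 1) / D
    # Dependence coefficients: l1*p1 + l2*p2 + l3*p3 - 1*p4 = 0.
    coeffs = [l1, l2, l3, Fraction(-1)]
    pos = [i for i, l in enumerate(coeffs) if l > 0]
    neg = [i for i, l in enumerate(coeffs) if l <= 0]
    return pos, neg
```

For instance, for \((0,0), (2,0), (0,2), (0.5, 0.5)\) the partition separates the inner point from the surrounding triangle.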
For the entire collection see [Zbl 1435.68020].

New algorithms and bounds for halving pseudolines. (Zbl 1453.68199)
Bereg, Sergey; Haghpanah, Mohammadreza
Summary: Let \(P\) be a set of \(n\) points in general position in the plane. A halving line of \(P\) is a line passing through two points of \(P\) that cuts the remaining \(n-2\) points in half (almost in half if \(n\) is odd). Generalized configurations of points and their representations using allowable sequences are useful for bounding the number of halving lines.
We study the problem of finding generalized configurations of points that maximize the number of halving pseudolines. We develop algorithms for optimizing generalized configurations of points using the new notion of a partial allowable sequence and the problem of computing a partial allowable sequence maximizing the number of \(k\)-transpositions. The latter can be viewed as a sorting problem using transpositions of adjacent elements in which the number of transpositions at position \(k\) is maximized.
We show that this problem can be solved in \(O(nk^n)\) time for any \(k>2\), and in \(O(n^k)\) time for \(k=1, 2\). We develop an approach for optimizing allowable sequences. Using this approach, we find new bounds for halving pseudolines for even \(n\), \(n\le 100\).
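As a concrete illustration of the object being counted (a sketch of ours, not the authors' method, and for straight halving lines rather than pseudolines), a brute-force count over all point pairs uses the sign of a cross product to classify the remaining points:

```python
from itertools import combinations

def count_halving_lines(points):
    """Count halving lines of a planar point set in general position.

    A line through two points is halving if the remaining n-2 points
    split as evenly as possible, i.e. the smaller side has
    floor((n-2)/2) points.
    """
    n = len(points)
    count = 0
    for i, j in combinations(range(n), 2):
        (ax, ay), (bx, by) = points[i], points[j]
        left = 0
        for k in range(n):
            if k in (i, j):
                continue
            cx, cy = points[k]
            # Sign of the cross product: which side of line a-b is c on.
            if (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) > 0:
                left += 1
        if min(left, n - 2 - left) == (n - 2) // 2:
            count += 1
    return count
```

For the four corners of a square, only the two diagonals are halving lines.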
For the entire collection see [Zbl 1435.68020].

Planar projections of graphs. (Zbl 1453.68121)
Aravind, N. R.; Maniyar, Udit
Summary: We introduce and study a new graph representation where vertices are embedded in three or more dimensions, and in which the edges are drawn on the projections onto the axis-parallel planes. We show that the complete graph on \(n\) vertices has a representation in \(\lceil\sqrt{n/2}+1\rceil\) planes. In 3 dimensions, we show that there exist graphs with \(6n-15\) edges that can be projected onto two orthogonal planes, and that this is best possible. Finally, we obtain bounds in terms of parameters such as geometric thickness and linear arboricity. Using such a bound, we show that every graph of maximum degree 5 has a plane-projectable representation in 3 dimensions.
For the entire collection see [Zbl 1435.68020].

Minimum conflict free colouring parameterized by treewidth. (Zbl 1453.68094)
Ashok, Pradeesha; Bhargava, Rathin; Gupta, Naman; Khalid, Mohammad; Yadav, Dolly
Summary: A conflict free \(q\)-colouring of a graph \(G\) is a colouring of a subset of the vertices of \(G\) using \(q\) colours such that every vertex has a neighbour whose colour is unique in its neighbourhood. In this paper, we study the \textsc{Minimum Conflict free q-Colouring} problem: given a graph \(G\) and a fixed constant \(q\), find a conflict free \(q\)-colouring of \(G\) that minimises the number of coloured vertices. We study the \textsc{Minimum Conflict free q-Colouring} problem parameterized by the treewidth of \(G\). We give an FPT algorithm for this problem and also prove running time lower bounds under the Exponential Time Hypothesis (ETH) and the Strong Exponential Time Hypothesis (SETH).
For the entire collection see [Zbl 1435.68020].

On the parameterized complexity of spanning trees with small vertex covers. (Zbl 1453.68096)
Kaur, Chamanvir; Misra, Neeldhara
Summary: We consider the minimum power spanning tree (MPST) problem with general and unit demands from a parameterized perspective. The case of unit demands is equivalent to the problem of finding a spanning tree with the smallest possible vertex cover (MCST). We show that MPST is W[1]-hard when parameterized by the vertex cover of the input graph, and is W[2]-hard when parameterized by the solution size -- the latter holds even in the case of unit demands. For the special case of unit demands, however, we demonstrate an FPT algorithm when parameterized by treewidth. In the context of kernelization, we show that even MCST is unlikely to admit a polynomial kernel under standard complexity-theoretic assumptions when parameterized by the vertex cover of the input graph.
For the entire collection see [Zbl 1435.68020].

Parameterized algorithms for directed modular width. (Zbl 1453.68097)
Steiner, Raphael; Wiederrecht, Sebastian
Summary: Many well-known \(\mathsf{NP}\)-hard algorithmic problems on directed graphs resist efficient parameterizations with most known width measures for directed graphs, such as directed treewidth, DAG-width, Kelly-width and many others. While these focus on measuring how close a digraph is to an oriented tree or a directed acyclic graph, in this paper we investigate directed modular width as a parameter, which is closer to the concept of clique-width. We investigate applications of modular decompositions of directed graphs to a wide range of algorithmic problems and derive FPT algorithms for several well-known digraph-specific \(\mathsf{NP}\)-hard problems, namely minimum (weight) directed feedback vertex set, minimum (weight) directed dominating set, digraph colouring, directed Hamiltonian path/cycle, partitioning into paths, (capacitated) vertex-disjoint directed paths, and the directed subgraph homeomorphism problem. The latter yields a polynomial-time algorithm for detecting topological minors in digraphs of bounded directed modular width. Finally, we illustrate that other structural digraph parameters, such as directed pathwidth and cycle-rank, can be computed efficiently using directed modular width as a parameter.
For the entire collection see [Zbl 1435.68020].

Overlaying a hypergraph with a graph with bounded maximum degree. (Zbl 1453.68134)
Havet, Frédéric; Mazauric, Dorian; Nguyen, Viet-Ha; Watrigant, Rémi
Summary: Let \(G\) and \(H\) be respectively a graph and a hypergraph defined on a same set of vertices, and let \(F\) be a fixed graph. We say that \(G\) \(F\)-overlays a hyperedge \(S\) of \(H\) if \(F\) is a spanning subgraph of the subgraph of \(G\) induced by \(S\), and that it \(F\)-overlays \(H\) if it \(F\)-overlays every hyperedge of \(H\). Motivated by structural biology, we study the computational complexity of two problems. The first problem, \((\varDelta\le k)F\)-\textsc{Overlay}, consists in deciding whether there is a graph with maximum degree at most \(k\) that \(F\)-overlays a given hypergraph \(H\). It is a particular case of the second problem \textsc{Max} \((\varDelta\le k)F\)-\textsc{Overlay}, which takes a hypergraph \(H\) and an integer \(s\) as input, and consists in deciding whether there is a graph with maximum degree at most \(k\) that \(F\)-overlays at least \(s\) hyperedges of \(H\).
We give a complete polynomial/\(\mathcal{NP}\)-complete dichotomy for the \textsc{Max} \((\varDelta\le k)F\)-\textsc{Overlay} problems depending on the pairs \((F, k)\), and establish the complexity of \((\varDelta\le k)F\)-\textsc{Overlay} for many pairs \((F, k)\).
For the entire collection see [Zbl 1435.68020].

An algorithm for strong stability in the student-project allocation problem with ties. (Zbl 1453.91073)
Olaosebikan, Sofiat; Manlove, David
Summary: We study a variant of the Student-Project Allocation problem with lecturer preferences over students where ties are allowed in the preference lists of students and lecturers (\textsc{spa-st}). We investigate the concept of strong stability in this context. Informally, a matching is strongly stable if there is no student \(s\) and lecturer \(l\) such that, if they decided to form a private arrangement outside of the matching via one of \(l\)'s proposed projects, then neither party would be worse off and at least one of them would strictly improve. We describe the first polynomial-time algorithm to find a strongly stable matching, or to report that no such matching exists, given an instance of \textsc{spa-st}. Our algorithm runs in \(O(m^2)\) time, where \(m\) is the total length of the students' preference lists.
For the entire collection see [Zbl 1435.68020].

A new model in firefighting theory. (Zbl 1453.68207)
Klein, Rolf; Kübel, David; Langetepe, Elmar; Sack, Jörg-Rüdiger; Schwarzwald, Barbara
Summary: Continuous and discrete models
[\textit{A. Bressan}, J. Differ. Equations 243, No. 2, 179--207 (2007; Zbl 1138.34002); \textit{F. V. Fomin} et al., Theor. Comput. Sci. 613, 38--50 (2016; Zbl 1333.05290)]
for firefighting problems are well-studied in theoretical computer science. We introduce a new, discrete, and more general framework based on a hexagonal cell graph to study firefighting problems in varied terrains. We present three different firefighting problems in the context of this model; for two of them we provide efficient polynomial-time algorithms, and for the third we show NP-completeness. We also discuss possible extensions of the model and their implications for the computational complexity.
For the entire collection see [Zbl 1435.68020].

Greedy universal cycle constructions for weak orders. (Zbl 1453.68229)
Jacques, Marsden; Wong, Dennis
Summary: A weak order is a way to rank \(n\) objects where ties are allowed. In this paper, we extend the prefer-larger and the prefer-opposite algorithms for de Bruijn sequences to provide the first known greedy universal cycle constructions for weak orders.
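For background, the classic greedy "prefer-largest" construction for binary de Bruijn sequences, which the paper generalizes to weak orders, can be sketched as follows (our own illustrative code, not the weak-order extension itself):

```python
def greedy_debruijn(n, alphabet=(0, 1)):
    """Greedy 'prefer-largest' construction of a de Bruijn sequence of
    order n: start with n copies of the smallest symbol, then repeatedly
    append the largest symbol whose new length-n suffix has not occurred
    before; stop when no symbol works.
    """
    seq = [alphabet[0]] * n
    seen = {tuple(seq)}
    while True:
        for s in sorted(alphabet, reverse=True):
            window = tuple(seq[len(seq) - n + 1:] + [s])
            if window not in seen:
                seen.add(window)
                seq.append(s)
                break
        else:  # no symbol extends the sequence: every window has appeared
            return seq
```

The result contains every length-\(n\) word over the alphabet exactly once as a substring; for the binary alphabet it has length \(2^n + n - 1\), and dropping the first \(n-1\) symbols yields the cyclic universal cycle.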
For the entire collection see [Zbl 1435.68020].

Arbitrary pattern formation by opaque fat robots with lights. (Zbl 1453.68192)
Bose, Kaustav; Adhikary, Ranendu; Kundu, Manash Kumar; Sau, Buddhadeb
Summary: \textsc{Arbitrary Pattern Formation} is a widely studied problem in autonomous robot systems. The problem asks to design a distributed algorithm that moves a team of autonomous, anonymous and identical mobile robots to form any arbitrary pattern given as input. The majority of the existing literature investigates this problem for robots with unobstructed visibility. In a few recent works, the problem has been studied in the obstructed visibility model, where the view of a robot can be obstructed by the presence of other robots. However, in these works, the robots have been modelled as dimensionless points in the plane. In this paper, we consider the problem in the more realistic setting where the robots have a physical extent. In particular, the robots are modelled as opaque disks. Furthermore, the robots operate under a fully asynchronous scheduler. They do not have access to any global coordinate system, but agree on the direction and orientation of one coordinate axis. Each robot is equipped with an externally visible light which can assume a constant number of predefined colors. In this setting, we give a complete characterization of the initial configurations from which any arbitrary pattern can be formed by a deterministic distributed algorithm.
For the entire collection see [Zbl 1435.68020].

0-1 timed matching in bipartite temporal graphs. (Zbl 1453.68141)
Mandal, Subhrangsu; Gupta, Arobinda
Summary: Temporal graphs are introduced to model dynamic networks where the set of edges and/or nodes can change with time. In this paper, we define 0-1 timed matching for temporal graphs, and address the problem of finding the maximum 0-1 timed matching for bipartite temporal graphs. We show that the problem is NP-complete for bipartite temporal graphs even when each edge is associated with exactly one time interval. We also show that the problem is NP-complete for rooted temporal trees even when each edge is associated with at most three time intervals. Finally, we propose an \(O(n^3)\) time algorithm for the problem on a rooted temporal tree with \(n\) nodes when each edge is associated with exactly one time interval.
For the entire collection see [Zbl 1435.68020].

Gathering over meeting nodes in infinite grid. (Zbl 1453.68191)
Bhagat, Subhash; Chakraborty, Abhinav; Das, Bibhuti; Mukhopadhyaya, Krishnendu
Summary: The gathering on meeting points problem requires the robots to gather at one of the pre-defined meeting points. This paper investigates a discrete version of the problem where the robots and meeting nodes are deployed on the nodes of an anonymous infinite square grid. The robots are identical, autonomous, anonymous, and oblivious. They operate under an asynchronous scheduler. Robots do not have any agreement on a global coordinate system. Initial configurations for which the problem is unsolvable are characterized. For the remaining configurations, a deterministic distributed algorithm is proposed to solve the problem.
For the entire collection see [Zbl 1435.68020].

Oriented diameter of star graphs. (Zbl 1453.68117)
Ajish Kumar, K. S.; Rajendraprasad, Deepak; Sudeep, K. S.
Summary: An orientation of an undirected graph \(G\) is an assignment of exactly one direction to each edge of \(G\). Converting two-way traffic networks to one-way traffic networks and bidirectional communication networks to unidirectional communication networks are practical instances of graph orientations. In these contexts, minimising the diameter of the resulting oriented graph is of prime interest.
The \(n\)-star network topology was proposed as an alternative to the hypercube network topology for multiprocessor systems by Akers and Krishnamurthy [IEEE Trans. on Computers (1989)].
The \(n\)-star graph \(S_n\) consists of \(n!\) vertices, each labelled with a distinct permutation of \([n]\). Two vertices are adjacent if their labels differ exactly in the first and one other position. \(S_n\) is an \((n-1)\)-regular, vertex-transitive graph with diameter \(\lfloor 3(n-1)/2\rfloor\). Orientations of \(S_n\), called unidirectional star graphs, and distributed routing protocols over them were studied by
\textit{K. Day} and \textit{A. Tripathi} [Inf. Process. Lett. 45, No. 3, 123--129 (1993; Zbl 0768.68135)]
and \textit{S. Fujita} [``On oriented diameter of star graphs'', in: Proceedings of the first international symposium on computing and networking, CANDAR'13. Los Alamitos, CA: IEEE Computer Society. 48--56 (2013; \url{doi:10.1109/CANDAR.2013.16})].
Fujita showed that the (directed) diameter of this unidirectional star graph \(\overrightarrow{S_n}\) is at most \(\lceil 5n/2\rceil+2\).
In this paper, we propose a new distributed routing algorithm for the same \(\overrightarrow{S_n}\) analysed by Fujita, which routes a packet from any node \(s\) to any node \(t\) at an undirected distance \(d\) from \(s\) using at most \(\min \{4d+4, 2n+4\}\) hops. This shows that the (directed) diameter of \(\overrightarrow{S_n}\) is at most \(2n+4\). We also show that the diameter of \(\overrightarrow{S_n}\) is at least \(2n\) when \(n \ge 7\), thereby showing that our upper bound is tight up to an additive factor.
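The undirected diameter formula \(\lfloor 3(n-1)/2\rfloor\) quoted above is easy to check for small \(n\) by exhaustive search; the sketch below (our own, with a hypothetical function name) builds \(S_n\) implicitly and exploits vertex-transitivity, so a single BFS suffices.

```python
import math
from collections import deque

def star_graph_diameter(n):
    """Diameter of the n-star graph S_n.

    Vertices are permutations of 1..n; two are adjacent iff one is
    obtained from the other by swapping position 0 with some other
    position.  S_n is vertex-transitive, so the diameter equals the
    eccentricity of any single vertex, computed here by BFS.
    """
    start = tuple(range(1, n + 1))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for i in range(1, n):
            w = list(v)
            w[0], w[i] = w[i], w[0]  # swap first position with position i
            w = tuple(w)
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    assert len(dist) == math.factorial(n)  # S_n is connected
    return max(dist.values())
```

For \(n=3\), \(S_3\) is a 6-cycle with diameter 3, matching \(\lfloor 3\cdot 2/2\rfloor\); for \(n=4\) the BFS gives 4, matching \(\lfloor 9/2\rfloor\).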
For the entire collection see [Zbl 1435.68020].

Waiting for trains: complexity results. (Zbl 1453.68093)
Tauer, Bjoern; Fischer, Dennis; Fuchs, Janosch; Koch, Laura Vargas; Zieger, Stephan
Summary: We introduce a model for train routing on railway systems. Trains route through a network over time from a start to an end depot. They occupy consecutive nodes and edges corresponding to their length and block each other. We study the case where the depots are part of the network (internal) and the case where the depots are not part of the network (external).
The problem is a generalization of packet routing without buffers. We consider two different kinds of optimization problems. In the first, trains are only allowed to wait on predefined paths and in the second, trains are additionally allowed to shunt, i.e., change direction. In both cases, we are interested in minimizing the overall makespan.
For waiting instances, we find NP-hardness results even on unidirectional paths. We also show W[1]-hardness and lower bounds on the running time using the Exponential Time Hypothesis. For shunting instances, we show PSPACE-completeness results on honeycomb graphs and transfer the previously shown NP-hardness results. We present a polynomial time algorithm for a special subclass of unidirectional paths.
For the entire collection see [Zbl 1435.68020].

On the minimum satisfiability problem. (Zbl 1453.68115)
Arif, Umair; Benkoczi, Robert; Gaur, Daya Ram; Krishnamurti, Ramesh
Summary: We characterize the optimal solution to the LP relaxation of the standard formulation for the minimum satisfiability problem. Based on the characterization, we give an \(O(nm^2)\) combinatorial algorithm that solves the fractional version of the minimum satisfiability problem optimally, where \(n\) is the number of variables and \(m\) the number of clauses. As a by-product, we obtain a \(2(1-1/2^k)\)-approximation algorithm for the minimum satisfiability problem, where \(k\) is the maximum number of literals in any clause. We also give a simple linear-time 2-approximation algorithm.
For the entire collection see [Zbl 1435.68020].

Maximum weighted edge biclique problem on bipartite graphs. (Zbl 1453.68143)
Pandey, Arti; Sharma, Gopika; Jain, Nivedit
Summary: For a graph \(G\), a complete bipartite subgraph of \(G\) is called a biclique of \(G\). For a weighted graph \(G=(V,E,w)\), where each edge \(e\in E\) has a weight \(w(e)\in\mathbb{R}\), the \textsc{Maximum Weighted Edge Biclique} (MWEB) problem is to find a biclique \(H\) of \(G\) such that \(\sum_{e\in E(H)}w(e)\) is maximum. The decision version of the MWEB problem is known to be NP-complete for bipartite graphs. In this paper, we show that the decision version of the MWEB problem remains NP-complete even if the input graph is a complete bipartite graph. On the positive side, if the weight of each edge is a positive real number in the input graph \(G\), then we show that the MWEB problem is \(O(n^2)\)-time solvable for bipartite permutation graphs, and \(O(m+n)\)-time solvable for chain graphs, which is a subclass of bipartite permutation graphs.
For the entire collection see [Zbl 1435.68020].

Hardness and approximation for the geodetic set problem in some graph classes. (Zbl 1453.68127)
Chakraborty, Dibyayan; Foucaud, Florent; Gahlawat, Harmender; Ghosh, Subir Kumar; Roy, Bodhayan
Summary: In this paper, we study the computational complexity of finding the geodetic number of graphs. A set of vertices \(S\) of a graph \(G\) is a geodetic set if any vertex of \(G\) lies on some shortest path between some pair of vertices from \(S\). The \textsc{Minimum Geodetic Set} (MGS) problem is to find a geodetic set of minimum cardinality. In this paper, we prove that MGS is NP-hard on planar graphs with maximum degree six and on line graphs. We also show that unless \(\mathrm{P}=\mathrm{NP}\), there is no polynomial-time algorithm to solve MGS with a sublogarithmic approximation factor (in terms of the number of vertices) even on graphs with diameter 2. On the positive side, we give an \(O(\sqrt[3]{n}\log n)\)-approximation algorithm for MGS on general graphs of order \(n\). We also give a 3-approximation algorithm for MGS on solid grid graphs, which are planar.
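The definition of a geodetic set translates directly into a (exponential-time) brute-force check, which may clarify the object being minimized; this sketch is ours (names hypothetical), for small connected graphs only, and uses the fact that \(w\) lies on a shortest \(u\)-\(v\) path iff \(d(u,w)+d(w,v)=d(u,v)\).

```python
from itertools import combinations
from collections import deque

def geodetic_number(adj):
    """Brute-force geodetic number of a small connected graph.

    adj: dict mapping each vertex to a list of its neighbours.
    A set S is geodetic if every vertex w lies on a shortest path
    between some pair u, v in S.
    """
    vertices = list(adj)

    def bfs(src):  # single-source shortest-path distances
        dist = {src: 0}
        queue = deque([src])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        return dist

    d = {v: bfs(v) for v in vertices}
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            covered = set(S)
            for u, v in combinations(S, 2):
                covered.update(
                    w for w in vertices if d[u][w] + d[w][v] == d[u][v]
                )
            if covered == set(vertices):
                return size
```

For the 4-cycle, two opposite vertices form a geodetic set, so the geodetic number is 2.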
For the entire collection see [Zbl 1435.68020].

Hardness results of global total \(k\)-domination problem in graphs. (Zbl 1453.68142)
Panda, B. S.; Goyal, Pooja
Summary: A set \(D\subseteq V_G\) of a graph \(G=(V_G,E_G)\) is called a global total \(k\)-dominating set of \(G\) if \(D\) is a total \(k\)-dominating set of both \(G\) and \(\overline{G}\), the complement of \(G\). The \textsc{Minimum Global Total \(k\)-Domination} problem is to find a global total \(k\)-dominating set of minimum cardinality of the input graph \(G\), and the \textsc{Decide Global Total \(k\)-Domination} problem is its decision version. The \textsc{Decide Global Total \(k\)-Domination} problem is known to be NP-complete for general graphs. In this paper, we study the complexity of the \textsc{Minimum Global Total \(k\)-Domination} problem. We show that the \textsc{Decide Global Total \(k\)-Domination} problem remains NP-complete for bipartite graphs and chordal graphs. Next, we show that the \textsc{Minimum Global Total \(k\)-Domination} problem admits a constant-factor approximation algorithm for bounded degree graphs. Finally, we show that the \textsc{Minimum Global Total \(k\)-Domination} problem is APX-complete for bounded degree graphs.
For the entire collection see [Zbl 1435.68020].

Geometric planar networks on bichromatic points. (Zbl 1453.68122)
Bandyapadhyay, Sayan; Banik, Aritra; Bhore, Sujoy; Nöllenburg, Martin
Summary: We study four classical graph problems -- Hamiltonian path, Traveling salesman, Minimum spanning tree, and Minimum perfect matching -- on geometric graphs induced by bichromatic (red and blue) points. These problems have been widely studied for points in the Euclidean plane, and many of them are \textsf{NP}-hard. In this work, we consider these problems in two restricted settings: (i) collinear points and (ii) equidistant points on a circle. We show that almost all of these problems can be solved in linear time in these constrained, yet non-trivial settings.
For the entire collection see [Zbl 1435.68020].

Vertex-edge domination in unit disk graphs. (Zbl 1453.68137)
Jena, Sangram K.; Das, Gautam K.
Summary: Let \(G=(V,E)\) be a simple graph. A set \(D\subseteq V\) is called a vertex-edge dominating set of \(G\) if for each edge \(e=(u,v)\in E\), either \(u\) or \(v\) or one of their neighbours is in \(D\). In other words, a vertex \(v\in V\) vertex-edge dominates every edge incident to \(v\), as well as every edge adjacent to these edges. The vertex-edge domination problem is to find a minimum vertex-edge dominating set of \(G\). Herein, we study the vertex-edge dominating set problem in unit disk graphs and prove that this problem is NP-hard in this class of graphs. We also show that the problem admits a polynomial time approximation scheme (PTAS) in unit disk graphs.
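The domination rule above can be made concrete with a small brute-force sketch (ours, not the paper's algorithm; exponential time, illustration only): a vertex ve-dominates every edge incident to it and every edge sharing an endpoint with such an edge.

```python
from itertools import combinations

def min_ve_dominating_set(vertices, edges):
    """Brute-force minimum vertex-edge dominating set.

    vertices: list of vertices; edges: list of (u, v) tuples.
    """
    def dominates_all(D):
        dom = set()
        for v in D:
            incident = [e for e in edges if v in e]
            for (a, b) in incident:
                # every edge touching an endpoint of an incident edge
                dom.update(e for e in edges if a in e or b in e)
        return dom == set(edges)

    for size in range(len(vertices) + 1):
        for D in combinations(vertices, size):
            if dominates_all(D):
                return set(D)
```

On the path \(0-1-2-3\), the single vertex 1 ve-dominates all three edges, so the minimum has size 1.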
For the entire collection see [Zbl 1435.68020].

Approximating modular decomposition is hard. (Zbl 1453.68133)
Habib, Michel; Mouatadid, Lalla; Zou, Mengchuan
Summary: In order to understand underlying structural regularities in a graph, a basic and useful technique, known as modular decomposition, looks for subsets of vertices that have the \textbf{exact} same neighbourhood to the outside. These are known as modules, and there exist linear-time algorithms to find them. This notion however is too strict, especially when dealing with graphs that arise from real-world data. This is why it is important to relax this condition by allowing some noise in the data. However, generalizing modular decomposition is far from obvious, since most of the proposals lose the algebraic properties of modules and therefore most of the nice algorithmic consequences. In this paper we introduce the notion of an \(\epsilon\)-\textbf{module}, which seems to be a good compromise that maintains some of the algebraic structure. Among the main results in the paper, we show that minimal \(\epsilon\)-modules can be computed in polynomial time; on the other hand, for maximal \(\epsilon\)-modules it is already NP-hard to decide whether a graph admits a 1-parallel decomposition, i.e., one step of \(\epsilon\)-module decomposition with \(\epsilon=1\).
For the entire collection see [Zbl 1435.68020].

Synesthesizer: physical modelling and machine learning for a color-based synthesizer in virtual reality. (Zbl 1453.00014)
Santini, Giovanni
Summary: The Synesthesizer is a software synthesizer inspired by chromesthesia, the kind of synesthesia that connects sounds and colors. While chromesthesia usually produces color perception in response to sound stimulation, this synthesizer does the opposite: sound is generated according to color detection. More precisely, RGB (Red Green Blue) values are detected (one pixel at a time) and used to determine the behaviour of five physical models for virtual instruments. The motivation for creating such a synthesizer arose from the desire to generate a timbral continuum out of the color continuum, allowing one to explore the relation between the color spectrum and the sound spectrum. The Synesthesizer has two additional possible applications: \begin{itemize} \item[\(\bullet\)] A picture can become a sort of score; graphic scores can have a different source of interpretation; \item[\(\bullet\)] Given its intuitiveness, it might allow even non-experts to explore the possibilities of sound synthesis. \end{itemize} The current version has been developed in a Virtual Reality (VR) environment.
For the entire collection see [Zbl 1425.00082].

Simple strategies versus optimal schedules in multi-agent patrolling. (Zbl 1453.68196)
Kawamura, Akitoshi; Soejima, Makoto
Summary: Suppose that a set of mobile agents, each with a predefined maximum speed, want to patrol a fence together so as to minimize the longest time interval during which a point on the fence is left unvisited.
In [Lect. Notes Comput. Sci. 6942, 701--712 (2011; Zbl 1260.68397)] \textit{J. Czyzowicz} et al.
studied this problem for the settings where the fence is an interval (a line segment) and a circle, and conjectured that the following simple strategies are always optimal: for Interval Patrolling, the simple strategy partitions the fence into subintervals, one for each agent, and lets each agent move back and forth in the assigned subinterval with its maximum speed; for Circle Patrolling, the simple strategy is to choose a number \(r\), place the \(r\) fastest agents equidistantly around the circle, and move them at the speed of the \(r\)th agent. Surprisingly, these conjectures were then proved false: schedules were found (for some settings of maximum speeds) that slightly outperform the simple strategies.
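The simple Circle Patrolling strategy is easy to evaluate: with the \(r\) fastest agents placed equidistantly and all moving at the speed \(v_r\) of the \(r\)-th fastest, consecutive agents are a fraction \(1/r\) of the circumference apart, so every point is revisited after time \(C/(r\,v_r)\); one then picks the best \(r\). A minimal sketch of ours (function name hypothetical):

```python
def circle_simple_strategy(speeds, circumference=1.0):
    """Idle time achieved by the simple strategy for Circle Patrolling:
    for each r, place the r fastest agents equidistantly and move them
    all in one direction at the speed of the r-th fastest; a point is
    then revisited every circumference / (r * v_r) time units.
    """
    v = sorted(speeds, reverse=True)  # v[r-1] is the r-th fastest speed
    return min(
        circumference / (r * v[r - 1]) for r in range(1, len(v) + 1)
    )
```

For speeds \((3, 1)\) on a unit circle, using only the fast agent gives idle time \(1/3\), which beats using both at speed 1 (idle time \(1/2\)).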
In this paper, we are interested in the ratio between the performances of optimal schedules and simple strategies. For the two problems, we construct schedules that are 4/3 times (for Interval Patrolling) and 21/20 times (for Circle Patrolling) as good, respectively, as the simple strategies. We also propose a new variant, in which we want to patrol a single point under the constraint that each agent can only visit the point some predefined time after its previous visit. We obtain some similar ratio bounds and \(\mathsf{NP}\)-hardness results related to this problem.Dynamics of spiking map-based neural networks in problems of supervised learning.https://www.zbmath.org/1453.920182021-02-27T13:50:00+00:00"Pugavko, Mechislav M."https://www.zbmath.org/authors/?q=ai:pugavko.mechislav-m"Maslennikov, Oleg V."https://www.zbmath.org/authors/?q=ai:maslennikov.oleg-v"Nekorkin, Vladimir I."https://www.zbmath.org/authors/?q=ai:nekorkin.vladimir-iSummary: Recurrent networks of artificial spiking neurons trained to perform target functions are a promising tool for understanding dynamic principles of information processing in computational neuroscience. Here, we develop a system of this type based on a map-based model of neural activity that can produce various biologically relevant regimes. Target signals used for supervised training of the network are sinusoidal functions of different frequencies. Impacts of individual neuron dynamics, coupling strength, network size and other key parameters on the learning error are studied. Our findings suggest, among other things, that firing rate heterogeneity as well as mixing of spiking and nonspiking regimes of neurons comprising the network can improve its performance for a wider range of target frequencies.
At the single neuron activity level, successful training gives rise to well-separated domains with qualitatively different dynamics.Online graph exploration on a restricted graph class: optimal solutions for tadpole graphs.https://www.zbmath.org/1453.682142021-02-27T13:50:00+00:00"Brandt, Sebastian"https://www.zbmath.org/authors/?q=ai:brandt.sebastian-f"Foerster, Klaus-Tycho"https://www.zbmath.org/authors/?q=ai:forster.klaus-tycho"Maurer, Jonathan"https://www.zbmath.org/authors/?q=ai:maurer.jonathan"Wattenhofer, Roger"https://www.zbmath.org/authors/?q=ai:wattenhofer.roger-pSummary: We study the problem of online graph exploration on undirected graphs, where a searcher has to visit every vertex and return to the origin. Once a new vertex is visited, the searcher learns of all neighboring vertices and the connecting edge weights.
The goal of such an exploration is to minimize its total cost, where each edge traversal incurs a cost of the corresponding edge weight. We investigate the problem on tadpole graphs (also known as dragons or kites), which consist of a cycle with an attached path. The construction
by \textit{S. Miyazaki} et al. [``The online graph exploration problem on restricted graphs'', IEICE Trans. Inf. Syst. E92.D, No. 9, 1620--1627 (2009; \url{doi:10.1587/transinf.E92.D.1620})]
can be extended to show that every online algorithm on these graphs must have a competitive ratio of at least \(2 - \varepsilon\), but the authors did not investigate non-unit edge weights. We show via amortized analysis that a greedy approach yields a matching competitive ratio of 2 on tadpole graphs, for arbitrary non-negative edge weights. Moreover, we also briefly discuss the topic of advice complexity on cycle and tadpole graphs.Deep learning for real-time crime forecasting and its ternarization.https://www.zbmath.org/1453.910852021-02-27T13:50:00+00:00"Wang, Bao"https://www.zbmath.org/authors/?q=ai:wang.bao"Yin, Penghang"https://www.zbmath.org/authors/?q=ai:yin.penghang"Bertozzi, Andrea Louise"https://www.zbmath.org/authors/?q=ai:bertozzi.andrea-louise"Brantingham, P. Jeffrey"https://www.zbmath.org/authors/?q=ai:brantingham.p-jeffrey"Osher, Stanley Joel"https://www.zbmath.org/authors/?q=ai:osher.stanley-j"Xin, Jack"https://www.zbmath.org/authors/?q=ai:xin.jack-xSummary: Real-time crime forecasting is important. However, accurate prediction of when and where the next crime will happen is difficult. No known physical model provides a reasonable approximation to such a complex system. Historical crime data are sparse in both space and time and the signal of interest is weak. In this work, the authors first present a proper representation of crime data. The authors then adapt the spatial-temporal residual network on the well-represented data to predict the distribution of crime in Los Angeles at the scale of hours in neighborhood-sized parcels. These experiments as well as comparisons with several existing approaches to prediction demonstrate the superiority of the proposed model in terms of accuracy. Finally, the authors present a ternarization technique to address the resource consumption issue for its deployment in the real world.
This work is an extension of our short conference proceeding paper [the first author et al., ``Deep learning for real time crime forecasting'', Preprint, \url{arXiv:1707.03340}].A faster cryptographer's Conspiracy Santa.https://www.zbmath.org/1453.680812021-02-27T13:50:00+00:00"Bultel, Xavier"https://www.zbmath.org/authors/?q=ai:bultel.xavier"Dreier, Jannik"https://www.zbmath.org/authors/?q=ai:dreier.jannik"Dumas, Jean-Guillaume"https://www.zbmath.org/authors/?q=ai:dumas.jean-guillaume"Lafourcade, Pascal"https://www.zbmath.org/authors/?q=ai:lafourcade.pascalSummary: In Conspiracy Santa, a variant of Secret Santa, a group of people offer each other Christmas gifts, where each member of the group receives a gift from the other members of the group. To that end, the members of the group form conspiracies, to decide on appropriate gifts, and usually divide the cost of each gift among all participants of that conspiracy. This requires settling the shared expenses per conspiracy, so Conspiracy Santa can actually be seen as an aggregation of several shared expenses problems.
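The shared-expenses settlement mentioned in this summary can be illustrated by the standard greedy heuristic of repeatedly sending money from the largest debtor to the largest creditor. This is a generic sketch for illustration only, not the paper's secure protocol; the function name `settle` is ours:

```python
def settle(balances, eps=1e-9):
    """Greedy settlement of shared expenses: repeatedly match the largest
    debtor with the largest creditor. Illustrative heuristic only; the
    minimum-transaction version of the problem is NP-complete."""
    debtors = {p: -b for p, b in balances.items() if b < -eps}
    creditors = {p: b for p, b in balances.items() if b > eps}
    txns = []
    while debtors and creditors:
        d = max(debtors, key=debtors.get)       # owes the most
        c = max(creditors, key=creditors.get)   # is owed the most
        amt = min(debtors[d], creditors[c])
        txns.append((d, c, amt))
        debtors[d] -= amt
        creditors[c] -= amt
        if debtors[d] <= eps:
            del debtors[d]
        if creditors[c] <= eps:
            del creditors[c]
    return txns

# positive balance = is owed money, negative = owes money; sums to zero
# two transactions suffice here: carol -> alice 20, bob -> alice 10
print(settle({"alice": 30.0, "bob": -10.0, "carol": -20.0}))
```

With \(n\) participants this heuristic always finishes with at most \(n - 1\) transactions, since each transaction removes at least one participant from the books.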
First, we show that the problem of finding a minimal number of transactions when settling shared expenses is NP-complete. Still, there exist good greedy approximations. Second, we present a greedy distributed secure solution to Conspiracy Santa. This solution allows a group of \(n\) people to share the expenses for the gifts in such a way that no participant learns the price of his gift, but at the same time notably reduces the number of transactions to \(2 \cdot n + 1\) with respect to a naïve aggregation of \(n \cdot (n - 2)\). Furthermore, our solution does not require a trusted third party, and can either be implemented physically (the participants are in the same room and exchange money using envelopes) or, over the Internet, using a cryptocurrency.Who witnesses The Witness? Finding witnesses in The Witness is hard and sometimes impossible.https://www.zbmath.org/1453.680872021-02-27T13:50:00+00:00"Abel, Zachary"https://www.zbmath.org/authors/?q=ai:abel.zachary-r"Bosboom, Jeffrey"https://www.zbmath.org/authors/?q=ai:bosboom.jeffrey"Coulombe, Michael"https://www.zbmath.org/authors/?q=ai:coulombe.michael-j"Demaine, Erik D."https://www.zbmath.org/authors/?q=ai:demaine.erik-d"Hamilton, Linus"https://www.zbmath.org/authors/?q=ai:hamilton.linus"Hesterberg, Adam"https://www.zbmath.org/authors/?q=ai:hesterberg.adam"Kopinsky, Justin"https://www.zbmath.org/authors/?q=ai:kopinsky.justin"Lynch, Jayson"https://www.zbmath.org/authors/?q=ai:lynch.jayson"Rudoy, Mikhail"https://www.zbmath.org/authors/?q=ai:rudoy.mikhail"Thielen, Clemens"https://www.zbmath.org/authors/?q=ai:thielen.clemensSummary: We analyze the computational complexity of the many types of pencil-and-paper-style puzzles featured in the 2016 puzzle video game \textit{The Witness}. In all puzzles, the goal is to draw a simple path in a rectangular grid graph from a start vertex to a destination vertex.
The different puzzle types place different constraints on the path: preventing some edges from being visited (broken edges); forcing some edges or vertices to be visited (hexagons); forcing some cells to have certain numbers of incident path edges (triangles); or forcing the regions formed by the path to be partially monochromatic (squares), have exactly two special cells (stars), or be singly covered by given shapes (polyominoes) and/or negatively counting shapes (antipolyominoes). We show that any \textit{one} of these clue types (except the first) is enough to make path finding NP-complete (``witnesses exist but are hard to find''), even for rectangular boards. Furthermore, we show that a final clue type (antibody), which necessarily ``cancels'' the effect of another clue in the same region, makes path finding \(\Sigma_2\)-complete (``witnesses do not exist''), even with a single antibody (combined with many anti/polyominoes), and the problem gets no harder with many antibodies. On the positive side, we give a polynomial-time algorithm for monomino clues, by reducing to hexagon clues on the boundary of the puzzle, even in the presence of broken edges, and solving ``subset Hamiltonian path'' for terminals on the boundary of an embedded planar graph in polynomial time.Tracks from hell -- when finding a proof may be easier than checking it.https://www.zbmath.org/1453.680902021-02-27T13:50:00+00:00"Almanza, Matteo"https://www.zbmath.org/authors/?q=ai:almanza.matteo"Leucci, Stefano"https://www.zbmath.org/authors/?q=ai:leucci.stefano"Panconesi, Alessandro"https://www.zbmath.org/authors/?q=ai:panconesi.alessandroSummary: We consider the popular smartphone game Trainyard: a puzzle game that requires the player to lay down tracks in order to route colored trains from departure stations to suitable arrival stations. While it is already known
[the authors, LIPIcs -- Leibniz Int. Proc. Inform. 49, Article 2, 14~p. (2016; Zbl 1369.68225); Theor. Comput. Sci. 748, 66--76 (2018; Zbl 1402.68078)]
that the problem of finding a solution to a given Trainyard instance (i.e., game level) is NP-hard, determining the computational complexity of \textit{checking} whether a candidate solution (i.e., a track layout) solves the level was left as an open problem. In this paper we prove that this verification problem is PSPACE-complete, thus implying that Trainyard players might not only have a hard time finding solutions to a given level, but they might even be unable to efficiently recognize them.On the exact complexity of polyomino packing.https://www.zbmath.org/1453.680922021-02-27T13:50:00+00:00"Bodlaender, Hans L."https://www.zbmath.org/authors/?q=ai:bodlaender.hans-l"van der Zanden, Tom C."https://www.zbmath.org/authors/?q=ai:van-der-zanden.tom-cSummary: We show that the problem of deciding whether a collection of polyominoes, each fitting in a \(2 \times O(\log n)\) rectangle, can be packed into a \(3 \times n\) box does not admit a \(2^{o (n / \log n)}\)-time algorithm, unless the Exponential Time Hypothesis fails. We also give an algorithm that attains this lower bound, solving any instance of polyomino packing with total area \(n\) in \(2^{O (n / \log n)}\) time. This establishes a tight bound on the complexity of \textsc{Polyomino Packing}, even in a very restricted case. In contrast, for a \(2 \times n\) box, we show that the problem can be solved in strongly subexponential time.Guarded computational type theory.https://www.zbmath.org/1453.030272021-02-27T13:50:00+00:00"Sterling, Jonathan"https://www.zbmath.org/authors/?q=ai:sterling.jonathan"Harper, Robert"https://www.zbmath.org/authors/?q=ai:harper.robertCan less be more? A game-theoretic analysis of filtering vs. 
investment.https://www.zbmath.org/1453.910162021-02-27T13:50:00+00:00"Sarabi, Armin"https://www.zbmath.org/authors/?q=ai:sarabi.armin"Naghizadeh, Parinaz"https://www.zbmath.org/authors/?q=ai:naghizadeh.parinaz"Liu, Mingyan"https://www.zbmath.org/authors/?q=ai:liu.mingyanSummary: In this paper we consider a single resource-constrained strategic adversary, who can arbitrarily distribute his resources over a set of nodes controlled by a single defender. The defender can (1) instruct nodes to filter incoming traffic from another node to reduce the chances of being compromised due to malicious traffic originating from that node, or (2) choose an amount of investment in security for each node in order to directly reduce loss, regardless of the origin of malicious traffic; leading to a filtering and an investment game, respectively. We derive and compare the Nash equilibria of both games for different resource constraints on the attacker. Our analysis and simulation results show that from either the attacker or the defender's point of view, neither of the games performs uniformly better than the other, as utilities drawn at the equilibria are dependent on the costs associated with each action and the amount of resources available to the attacker. More interestingly, in games with highly resourceful attackers, not only does the defender sustain a higher loss, but the adversary is also at a disadvantage compared to less resourceful attackers.
For the entire collection see [Zbl 1325.68021].LMSO: a Curry-Howard approach to Church's synthesis via linear logic.https://www.zbmath.org/1453.030702021-02-27T13:50:00+00:00"Pradic, Pierre"https://www.zbmath.org/authors/?q=ai:pradic.pierre"Riba, Colin"https://www.zbmath.org/authors/?q=ai:riba.colinDifferential equation axiomatization. The impressive power of differential ghosts.https://www.zbmath.org/1453.030262021-02-27T13:50:00+00:00"Platzer, André"https://www.zbmath.org/authors/?q=ai:platzer.andre"Tan, Yong Kiam"https://www.zbmath.org/authors/?q=ai:tan.yong-kiamSecure message delivery games for device-to-device communications.https://www.zbmath.org/1453.910142021-02-27T13:50:00+00:00"Panaousis, Emmanouil"https://www.zbmath.org/authors/?q=ai:panaousis.emmanouil"Alpcan, Tansu"https://www.zbmath.org/authors/?q=ai:alpcan.tansu"Fereidooni, Hossein"https://www.zbmath.org/authors/?q=ai:fereidooni.hossein"Conti, Mauro"https://www.zbmath.org/authors/?q=ai:conti.mauroSummary: Device-to-device (D2D) communication is expected to be a key feature supported by next generation cellular networks. D2D can extend the cellular coverage allowing users to communicate when telecommunications infrastructure are highly congested or absent. In D2D networks, any message delivery from a source to a destination relies exclusively on intermediate devices. Each device can run different kinds of mobile security software, which offer protection against viruses and other harmful programs by using real-time scanning in every file entering the device. In this paper, we investigate the best D2D network path to deliver a potentially malicious message from a source to a destination. Although our primary objective is to increase security, we also investigate the contribution of energy costs and quality-of-service to the path selection. To this end, we propose the secure message delivery (SMD) protocol, whose main functionality is determined by the solution of the secure message delivery game (SMDG). 
This game is played between the defender (i.e., the D2D network), which abstracts all legitimate network devices, and the attacker, which abstracts any adversary that can inject different malicious messages into the D2D network in order, for instance, to infect a device with malware. Simulation results demonstrate the degree of improvement that SMD introduces as opposed to a shortest path routing protocol. This improvement has been measured in terms of the defender's expected cost as defined in SMDGs. This cost includes expected security damages, energy consumption incurred due to message inspection, and the quality-of-service of the D2D message communications.
For the entire collection see [Zbl 1325.68021].An algebraic theory of Markov processes.https://www.zbmath.org/1453.080022021-02-27T13:50:00+00:00"Bacci, Giorgio"https://www.zbmath.org/authors/?q=ai:bacci.giorgio"Mardare, Radu"https://www.zbmath.org/authors/?q=ai:mardare.radu"Panangaden, Prakash"https://www.zbmath.org/authors/?q=ai:panangaden.prakash"Plotkin, Gordon"https://www.zbmath.org/authors/?q=ai:plotkin.gordon-dBoolean-valued semantics for the stochastic \(\lambda \)-calculus.https://www.zbmath.org/1453.030062021-02-27T13:50:00+00:00"Bacci, Giorgio"https://www.zbmath.org/authors/?q=ai:bacci.giorgio"Furber, Robert"https://www.zbmath.org/authors/?q=ai:furber.robert-w-j|furber.robert-d"Kozen, Dexter"https://www.zbmath.org/authors/?q=ai:kozen.dexter-c"Mardare, Radu"https://www.zbmath.org/authors/?q=ai:mardare.radu"Panangaden, Prakash"https://www.zbmath.org/authors/?q=ai:panangaden.prakash"Scott, Dana"https://www.zbmath.org/authors/?q=ai:scott.dana-sThe BMM symmetrising trace conjecture for groups \(G_{4}\), \(G_{5}\), \(G_{6}\), \(G_{7}\), \(G_{8}\).https://www.zbmath.org/1453.200082021-02-27T13:50:00+00:00"Boura, Christina"https://www.zbmath.org/authors/?q=ai:boura.christina"Chavli, Eirini"https://www.zbmath.org/authors/?q=ai:chavli.eirini"Chlouveraki, Maria"https://www.zbmath.org/authors/?q=ai:chlouveraki.maria"Karvounis, Konstantinos"https://www.zbmath.org/authors/?q=ai:karvounis.konstantinosSummary: We prove the BMM symmetrising trace conjecture for the exceptional irreducible complex reflection groups \(G_4\), \(G_5\), \(G_6\), \(G_7\), \(G_8\) using a combination of algorithms programmed in different languages (\texttt{C++}, SAGE, GAP3, \textit{Mathematica}). 
Our proof depends on the choice of a suitable basis for the generic Hecke algebra associated with each group.Ideal regular languages and strongly connected synchronizing automata.https://www.zbmath.org/1453.681012021-02-27T13:50:00+00:00"Reis, Rogério"https://www.zbmath.org/authors/?q=ai:reis.rogerio"Rodaro, Emanuele"https://www.zbmath.org/authors/?q=ai:rodaro.emanueleSummary: We introduce the notion of a reset left regular decomposition of an ideal regular language, and we prove that the category formed by these decompositions with the adequate set of morphisms is equivalent to the category of strongly connected synchronizing automata. We show that every ideal regular language has at least one reset left regular decomposition. As a consequence, every ideal regular language is the set of synchronizing words of some strongly connected synchronizing automaton. Furthermore, this one-to-one correspondence allows us to introduce the notion of reset decomposition complexity of an ideal from which follows a reformulation of Černý's conjecture in purely language theoretic terms. Finally, we present and characterize a subclass of ideal regular languages for which a better upper bound for the reset decomposition complexity holds with respect to the general case.Lossless data hiding for binary document images using \(n\)-pairs pattern.https://www.zbmath.org/1453.680682021-02-27T13:50:00+00:00"Kim, Cheonshik"https://www.zbmath.org/authors/?q=ai:kim.cheonshik"Baek, Jinsuk"https://www.zbmath.org/authors/?q=ai:baek.jinsuk"Fisher, Paul S."https://www.zbmath.org/authors/?q=ai:fisher.paul-sSummary: Lossless data embedding theory has entered a new era for data hiding and information security. In a lossless scheme, the original data and the embedded data should be completely recoverable. Our \(n\)-pairs pattern method is a significant advance in lossless data hiding schemes. 
This paper shows that the proposed \(n\)-pairs pattern method can achieve greater embedding capacity while keeping distortion at the same level as the PS-K method (Pattern Substitution by pseudo random number generator to produce a key K). The performance of the \(n\)-pairs pattern method is thus shown to be better than the performance of PS-K.
For the entire collection see [Zbl 1318.68031].Fixed point techniques in analog systems.https://www.zbmath.org/1453.680782021-02-27T13:50:00+00:00"Poças, Diogo"https://www.zbmath.org/authors/?q=ai:pocas.diogo"Zucker, Jeffery"https://www.zbmath.org/authors/?q=ai:zucker.jeffery-iSummary: Analog computation is concerned with continuous rather than discrete spaces. Most of the physical processes arising in nature are modeled by differential equations, either ordinary (example: spring/mass/damper system) or partial (example: heat diffusion). In analog computability, the existence of an effective way to obtain solutions (either exact or approximate) of these systems is essential. We develop a framework in which the solutions can be seen as fixed points of certain operators on continuous data streams, using the framework of Fréchet spaces. We apply a fixed point construction to retrieve these solutions and present sufficient conditions on the operators and initial inputs to ensure existence and uniqueness of these corresponding fixed points.
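The fixed-point view of ODE solutions described in this summary can be illustrated with classical Picard iteration: the solution of \(y' = f(y)\), \(y(0) = y_0\) is the fixed point of the operator \(\Phi(y)(t) = y_0 + \int_0^t f(y(s))\,ds\) acting on data streams. The following is our own generic numerical sketch over sampled streams, not the authors' Fréchet-space construction:

```python
import math

def picard_fixed_point(f, y0, t_max=1.0, n=1000, iterations=30):
    """Iterate the Picard operator Phi(y)(t) = y0 + int_0^t f(y(s)) ds
    on a stream of n+1 equally spaced samples; the iterates converge to
    the fixed point, i.e. the solution of y' = f(y), y(0) = y0."""
    dt = t_max / n
    y = [y0] * (n + 1)              # initial guess: the constant stream y0
    for _ in range(iterations):
        z, acc = [y0], 0.0
        for i in range(n):          # trapezoidal rule for the integral
            acc += 0.5 * (f(y[i]) + f(y[i + 1])) * dt
            z.append(y0 + acc)
        y = z
    return y

# y' = -y, y(0) = 1 has solution exp(-t); the fixed point reproduces it
ys = picard_fixed_point(lambda y: -y, 1.0)
print(ys[-1], math.exp(-1))         # the two values agree closely
```

For a Lipschitz right-hand side the operator is a contraction on a short enough interval, which is what guarantees existence and uniqueness of the fixed point in the continuous setting.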
For the entire collection see [Zbl 1362.00026].Schnorr randomness for noncomputable measures.https://www.zbmath.org/1453.030442021-02-27T13:50:00+00:00"Rute, Jason"https://www.zbmath.org/authors/?q=ai:rute.jasonSummary: This paper explores a novel definition of Schnorr randomness for noncomputable measures. We say \(x\) is uniformly Schnorr \(\mu\)-random if \(t(\mu,x)<\infty\) for all lower semicomputable functions \(t(\mu,x)\) such that \(\mu \mapsto \int t(\mu,x)d\mu(x)\) is computable. We prove a number of theorems demonstrating that this is the correct definition which enjoys many of the same properties as Martin-Löf randomness for noncomputable measures. Nonetheless, a number of our proofs significantly differ from the Martin-Löf case, requiring new ideas from computable analysis.On products of shifts in arbitrary fields.https://www.zbmath.org/1453.110212021-02-27T13:50:00+00:00"Warren, Audie"https://www.zbmath.org/authors/?q=ai:warren.audieSummary: We adapt the approach of \textit{M. Rudnev}, \textit{G. Shakan} and \textit{I. D. Shkredov} [Proc. Am. Math. Soc. 148, No. 4, 1467--1479 (2020; Zbl 1442.11029)] to prove that in an arbitrary field \(\mathbb{F}\), for all \(A \subset \mathbb{F}\) finite with \(|A| < p^{1/4}\) if \(p:= \operatorname{char}(\mathbb{F})\) is positive, we have \[ |A(A+1)| \gg \frac{|A|^{11/9}}{(\log|A|)^{7/6}}, \quad |AA| + |(A+1)(A+1)| \gg \frac{|A|^{11/9}}{(\log|A|)^{7/6}}. \] This improves upon the exponent of \(\frac65\) given by an incidence theorem of \textit{S. Stevens} and \textit{F. de Zeeuw} [Bull. Lond. Math. Soc. 49, No. 
5, 842--858 (2017; Zbl 1388.51002)].The binomial equivalence classes of finite words.https://www.zbmath.org/1453.681452021-02-27T13:50:00+00:00"Lejeune, Marie"https://www.zbmath.org/authors/?q=ai:lejeune.marie"Rigo, Michel"https://www.zbmath.org/authors/?q=ai:rigo.michel"Rosenfeld, Matthieu"https://www.zbmath.org/authors/?q=ai:rosenfeld.matthieuAuthors' abstract: Two finite words \(u\) and \(v\) are \(k\)-binomially equivalent if, for each word \(x\) of length at most \(k\), \(x\) appears the same number of times as a subsequence (i.e., as a scattered subword) of both \(u\) and \(v\). This notion generalizes abelian equivalence. In this paper, we study the equivalence classes induced by the \(k\)-binomial equivalence. We provide an algorithm generating the 2-binomial equivalence class of a word. For \(k \geq 2\) and alphabet of 3 or more symbols, the language made of lexicographically least elements of every \(k\)-binomial equivalence class and the language of singletons, i.e., the words whose \(k\)-binomial equivalence class is restricted to a single element, are shown to be non-context-free. As a consequence of our discussions, we also prove that the submonoid generated by the generators of the free nil-2 group (also called free nilpotent group of class 2) on \(m\) generators is isomorphic to the quotient of the free monoid \(\{1, \ldots, m\}^*\) by the 2-binomial equivalence.
Reviewer: Zoltán Kása (Cluj-Napoca)Pseudorandom pseudo-distributions with near-optimal error for read-once branching programs.https://www.zbmath.org/1453.682112021-02-27T13:50:00+00:00"Braverman, Mark"https://www.zbmath.org/authors/?q=ai:braverman.mark"Cohen, Gil"https://www.zbmath.org/authors/?q=ai:cohen.gil"Garg, Sumegha"https://www.zbmath.org/authors/?q=ai:garg.sumeghaSimple strategies versus optimal schedules in multi-agent patrolling.https://www.zbmath.org/1453.681952021-02-27T13:50:00+00:00"Kawamura, Akitoshi"https://www.zbmath.org/authors/?q=ai:kawamura.akitoshi"Soejima, Makoto"https://www.zbmath.org/authors/?q=ai:soejima.makotoSummary: Suppose that we want to patrol a fence (line segment) using \(k\) mobile agents with given speeds \(v_1,\dots, v_k\) so that every point on the fence is visited by an agent at least once in every unit time period. A simple strategy where the \(i\)th agent moves back and forth in a segment of length \(v_i / 2\) patrols the length \((v_1 + \cdots + v_k) / 2\), but it has been shown recently that this is not always optimal. Thus a natural question is to determine the smallest \(c\) such that a fence of length \(c (v_1 + \cdots + v_k) / 2\) cannot be patrolled. We give an example showing \(c \geq 4 / 3\) (and conjecture that this is the best possible).{
}We also consider a variant of this problem where we want to patrol a circle and the agents can move only clockwise. We can patrol a circle of perimeter \(r v_r\) by a simple strategy where the \(r\) fastest agents move at the same speed, but it has been shown recently that this is not always optimal. We conjecture that this is not even
a constant-approximation strategy. To tackle this conjecture, we relate it to what we call \textit{constant gap families}. Using this relation, we give another example where the simple strategy is not optimal.{
}We propose another variant where we want to patrol a single point under the constraint that for each \(i = 1,\dots, k\), the time between two consecutive visits of agent \(i\) should be \(a_i\) or longer. This problem can be reduced to the discretized version where the \(a_i\) are integers and the goal is to visit the point at every integer time. It is easy to see that this discretized patrolling is impossible if \(1 / a_1 + \cdots + 1 / a_k < 1\), and that there is a simple strategy if \(1 / a_1 + \cdots + 1 / a_k \geq 2\). Thus we are interested in the smallest \(c\) such that patrolling is always possible if \(1 / a_1 + \cdots + 1 / a_k \geq c\). We prove that \(\alpha \leq c < 1.546\), where \(\alpha = 1.264 \ldots \) (we conjecture that \(c = \alpha \)). We also discuss the computational
complexity of related problems.
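The feasibility thresholds for the discretized point patrolling problem stated above can be summarized in a small sketch (illustrative only; the function name `classify_point_patrolling` is ours): writing \(H = 1/a_1 + \cdots + 1/a_k\), patrolling is impossible when \(H < 1\), a simple strategy exists when \(H \geq 2\), and in between the answer depends on the instance.

```python
def classify_point_patrolling(a):
    """Apply the thresholds from the summary to integer revisit gaps a_i:
    impossible if sum(1/a_i) < 1; a simple strategy exists if the sum is
    at least 2; otherwise the instance needs case-by-case analysis
    (the critical constant c lies in [1.264..., 1.546))."""
    h = sum(1.0 / x for x in a)
    if h < 1:
        return "impossible"
    if h >= 2:
        return "simple strategy exists"
    return "undetermined by the thresholds"

print(classify_point_patrolling([2, 3]))        # 1/2 + 1/3 < 1 -> impossible
print(classify_point_patrolling([2, 2, 2, 2]))  # sum = 2 -> simple strategy exists
```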
For the entire collection see [Zbl 1316.68024].Towards an institutional framework for heterogeneous formal development in UML -- a position paper.https://www.zbmath.org/1453.681072021-02-27T13:50:00+00:00"Knapp, Alexander"https://www.zbmath.org/authors/?q=ai:knapp.alexander"Mossakowski, Till"https://www.zbmath.org/authors/?q=ai:mossakowski.till"Roggenbach, Markus"https://www.zbmath.org/authors/?q=ai:roggenbach.markusSummary: We present a framework for formal software development with UML. In contrast to previous approaches to equipping UML with a formal semantics, we propose an institution-based heterogeneous approach. This can express suitable formal semantics of the different UML diagram types directly, without the need to map everything to one specific formalism (be it first-order logic or graph grammars). We provide ideas on how different aspects of the formal development process can be coherently formalised, ranging from requirements through design and Hoare-style conditions on code to the implementation itself. The framework can be used to verify consistency of different UML diagrams both horizontally (e.g., consistency among various requirements) as well as vertically (e.g., correctness of design or implementation w.r.t. the requirements).
For the entire collection see [Zbl 1312.68005].Institutions for OCL-like expression languages.https://www.zbmath.org/1453.681062021-02-27T13:50:00+00:00"Knapp, Alexander"https://www.zbmath.org/authors/?q=ai:knapp.alexander"Cengarle, María Victoria"https://www.zbmath.org/authors/?q=ai:cengarle.maria-victoriaSummary: In 2008, Martin Wirsing initiated the project of conceiving the ``Unified Modeling Language'' (UML) as a heterogeneous modelling language. He proposed to use the theory of heterogeneous institutions for providing individual semantics to each sub-language, that can then be integrated using institution (co-)morphisms. In particular, the proposal allows for seamlessly capturing the notorious semantic variation points of UML with mathematical rigour. In this line of research, we contribute an institutional framework for the ``Object Constraint Language'' (OCL), UML's language for expressing constraints.
For the entire collection see [Zbl 1312.68005].Generate \& check method for verifying transition systems in CafeOBJ.https://www.zbmath.org/1453.681112021-02-27T13:50:00+00:00"Futatsugi, Kokichi"https://www.zbmath.org/authors/?q=ai:futatsugi.kokichiSummary: An interactive theorem proving method for the verification of infinite state transition systems is described.
The state space of a transition system is defined as a quotient set (i.e. a set of equivalence classes) of terms of a topmost sort State, and the transitions are defined with conditional rewrite rules over the quotient set. A property to be verified is either (1) an invariant (i.e. a state predicate that is valid for all reachable states) or (2) a (\(p\) leads-to \(q\)) property for two state predicates \(p\) and \(q\), where (\(p\) leads-to \(q\)) means that from any reachable state \(s\) with \((p(s) = \text{true})\) the system will get into a state \(t\) with \((q(t) = \text{true})\) no matter what transition sequence is taken.
Verification is achieved by developing proof scores in CafeOBJ. Sufficient verification conditions are formalized for verifying invariants and (\(p\) leads-to \(q\)) properties. For each verification condition, a proof score is constructed to (1) generate a finite set of state patterns that covers the possibly infinite set of states and (2) check validity of the verification condition for all the covering state patterns by reductions.
The method achieves significant automation of proof score developments.
For the entire collection see [Zbl 1312.68005].Efficiently realizing interval sequences.https://www.zbmath.org/1453.051232021-02-27T13:50:00+00:00"Bar-Noy, Amotz"https://www.zbmath.org/authors/?q=ai:bar-noy.amotz"Choudhary, Keerti"https://www.zbmath.org/authors/?q=ai:choudhary.keerti"Peleg, David"https://www.zbmath.org/authors/?q=ai:peleg.david"Rawitz, Dror"https://www.zbmath.org/authors/?q=ai:rawitz.drorPartial valuation structures for qualitative soft constraints.https://www.zbmath.org/1453.681572021-02-27T13:50:00+00:00"Schiendorfer, Alexander"https://www.zbmath.org/authors/?q=ai:schiendorfer.alexander"Knapp, Alexander"https://www.zbmath.org/authors/?q=ai:knapp.alexander"Steghöfer, Jan-Philipp"https://www.zbmath.org/authors/?q=ai:steghofer.jan-philipp"Anders, Gerrit"https://www.zbmath.org/authors/?q=ai:anders.gerrit"Siefert, Florian"https://www.zbmath.org/authors/?q=ai:siefert.florian"Reif, Wolfgang"https://www.zbmath.org/authors/?q=ai:reif.wolfgangSummary: Soft constraints have proved to be a versatile tool for the specification and implementation of decision making in adaptive systems. A plethora of formalisms have been devised to capture different notions of preference. Wirsing et al. have proposed partial valuation structures as a unifying algebraic structure for several soft constraint formalisms, including quantitative and qualitative ones, which, in particular, supports lexicographic products in a broad range of cases. We demonstrate the versatility of partial valuation structures by integrating the qualitative formalism of constraint relationships as well as the hybrid concept of constraint hierarchies. The latter inherently relies on lexicographic combinations, but it turns out that not all can be covered directly by partial valuation structures. We therefore investigate a notion for simulating partial valuation structures not amenable to lexicographic combinations by better suited ones. 
The concepts are illustrated by a case study in decentralized energy management.
For the entire collection see [Zbl 1312.68005].Localized operational termination in general logics.https://www.zbmath.org/1453.680532021-02-27T13:50:00+00:00"Lucas, Salvador"https://www.zbmath.org/authors/?q=ai:lucas.salvador"Meseguer, José"https://www.zbmath.org/authors/?q=ai:meseguer.joseSummary: Termination can be thought of as the property of programs ensuring that every input is given an answer in finite time. There are, however, many different (combinations of) programming paradigms and languages for these paradigms. Is a common formal definition of termination of programs in any (or most) of these programming languages possible? The notion of operational termination provides a general definition of termination which relies on the logic-based description of (the operational semantics of) a programming language. The key point is capturing termination as the absence of infinite inference, that is: all proof attempts must either successfully terminate, or they must fail in finite time. This global notion is well-suited for most declarative languages, where programs are theories in a logic whose inference system is specialized to each theory to characterize its computations. Other programming languages (e.g., imperative languages) and applications (e.g., the evaluation of specific expressions and goals in functional and logic programs) require a more specialized treatment which pays attention not just to theories, but to specific formulas to be proved within the given theory. For instance, the execution of an imperative program can be viewed as a proof of a specific formula (representing the program) within the computational logic describing the operational semantics of the programming language. In such cases, an appropriate definition of termination should focus on proving the absence of infinite proofs for computations localized to specific goals.
In this paper we generalize the global notion of operational termination to this new setting and adapt the recently introduced OT-framework for mechanizing proofs of operational termination to support proofs of localized operational termination.
For the entire collection see [Zbl 1312.68005].Simplified coalgebraic trace equivalence.https://www.zbmath.org/1453.681122021-02-27T13:50:00+00:00"Kurz, Alexander"https://www.zbmath.org/authors/?q=ai:kurz.alexander"Milius, Stefan"https://www.zbmath.org/authors/?q=ai:milius.stefan"Pattinson, Dirk"https://www.zbmath.org/authors/?q=ai:pattinson.dirk"Schröder, Lutz"https://www.zbmath.org/authors/?q=ai:schroder.lutzSummary: The analysis of concurrent and reactive systems is based to a large degree on various notions of process equivalence, ranging, on the so-called linear-time/branching-time spectrum, from fine-grained equivalences such as strong bisimilarity to coarse-grained ones such as trace equivalence. The theory of concurrent systems at large has benefited from developments in coalgebra, which has enabled uniform definitions and results that provide a common umbrella for seemingly disparate system types including non-deterministic, weighted, probabilistic, and game-based systems. In particular, there has been some success in identifying a generic coalgebraic theory of bisimulation that matches known definitions in many concrete cases. The situation is currently somewhat less settled regarding trace equivalence. A number of coalgebraic approaches to trace equivalence have been proposed, none of which however cover all cases of interest; notably, all these approaches depend on explicit termination, which is not always imposed in standard systems, e.g. labelled transition systems. Here, we discuss a joint generalization of these approaches based on embedding functors modelling various aspects of the system, such as transition and branching, into a global monad; this approach appears to cover all cases considered previously and some additional ones, notably standard and probabilistic labelled transition systems.
For the entire collection see [Zbl 1312.68005].Logical relations and nondeterminism.https://www.zbmath.org/1453.680482021-02-27T13:50:00+00:00"Hofmann, Martin"https://www.zbmath.org/authors/?q=ai:hofmann.martin.1Summary: The purpose of this article is to illustrate some technical difficulties encountered when trying to extend a logical relation to the Hoare powerdomain. We give a partial solution and some applications. Our vehicle is a simple call-by-value programming language with binary nondeterministic choice. We define both a big-step operational semantics and a denotational semantics using the Hoare powerdomain. Using our logical relation we then show equivalence of the two semantics in the sense of computational adequacy and some type-dependent program equivalences.
For the entire collection see [Zbl 1312.68005].Division by zero in common meadows.https://www.zbmath.org/1453.681042021-02-27T13:50:00+00:00"Bergstra, Jan A."https://www.zbmath.org/authors/?q=ai:bergstra.jan-a"Ponse, Alban"https://www.zbmath.org/authors/?q=ai:ponse.albanSummary: Common meadows are fields expanded with a total multiplicative inverse function. Division by zero produces an additional value denoted with ``\({\mathbf {a}}\)'' that propagates through all operations of the meadow signature (this additional value can be interpreted as an error element). We provide a basis theorem for so-called common cancellation meadows of characteristic zero, that is, common meadows of characteristic zero that admit a certain cancellation law.
For the entire collection see [Zbl 1312.68005].Modal satisfiability via SMT solving.https://www.zbmath.org/1453.030152021-02-27T13:50:00+00:00"Areces, Carlos"https://www.zbmath.org/authors/?q=ai:areces.carlos"Fontaine, Pascal"https://www.zbmath.org/authors/?q=ai:fontaine.pascal"Merz, Stephan"https://www.zbmath.org/authors/?q=ai:merz.stephanSummary: Modal logics extend classical propositional logic, and they are robustly decidable. Whereas most existing decision procedures for modal logics are based on tableau constructions, we propose a framework for obtaining decision procedures by adding instantiation rules to standard SAT and SMT solvers. Soundness, completeness, and termination of the procedures can be proved in a uniform and elementary way for the basic modal logic and some extensions.
For the entire collection see [Zbl 1312.68005].The broad view: how to spawn a radical organizational transformation ``en passant''.https://www.zbmath.org/1453.680062021-02-27T13:50:00+00:00"Hussmann, Heinrich"https://www.zbmath.org/authors/?q=ai:hussmann.heinrichSummary: This short paper is a rather personal account of the process that, in the end, led the author to move from being a scholar of an influential person to being his colleague. It tells the story of how a new organizational backbone for the Institute of Informatics at LMU was created by predictive thinking, and it is a homage to the ability to switch from technical detail to a very broad but nevertheless precise analysis of development opportunities for an organization.
For the entire collection see [Zbl 1312.68005].From formal logic through program transformations to system dynamics: 40 years of meeting points with Martin Wirsing.https://www.zbmath.org/1453.680042021-02-27T13:50:00+00:00"Hesse, Wolfgang"https://www.zbmath.org/authors/?q=ai:hesse.wolfgangFrom the text: My first meeting with Martin Wirsing is of a rather virtual nature: we both have the same academic background and got our diplomas from the same institution, the Institute of Mathematical Logic of LMU Munich directed by Kurt Schütte. But we did not (yet) meet personally: while I worked on my diploma thesis on \(\mu \)-recursive functions and the non-eliminability of some ugly functionals in intricate number theory, Martin entered the institute as a student and left it some years later having completed two theses (diploma and Ph.D.) on similarly mystical problems such as (un-)decidability of (sub-)classes of formulae in first-order predicate logic.
For the entire collection see [Zbl 1312.68005].Ode to the PST.https://www.zbmath.org/1453.680052021-02-27T13:50:00+00:00"Hölzl, Matthias"https://www.zbmath.org/authors/?q=ai:holzl.matthias-m"Koch, Nora"https://www.zbmath.org/authors/?q=ai:koch.nora"Mayer, Philip"https://www.zbmath.org/authors/?q=ai:mayer.philip"Schroeder, Andreas"https://www.zbmath.org/authors/?q=ai:schroeder.andreas"Belzner, Lenz"https://www.zbmath.org/authors/?q=ai:belzner.lenz"Busch, Marianne"https://www.zbmath.org/authors/?q=ai:busch.marianne"Fasching, Anton"https://www.zbmath.org/authors/?q=ai:fasching.anton"Klarl, Annabelle"https://www.zbmath.org/authors/?q=ai:klarl.annabelle"Kroiss, Christian"https://www.zbmath.org/authors/?q=ai:kroiss.christian"Raed, Laith"https://www.zbmath.org/authors/?q=ai:raed.laithFrom the text: 1992! Internet in Germany was in its infancy, object oriented programming wasn't well-known yet, and even the European Union, which has funded so many of our research efforts lately, had not been formed.
Computing was certainly not mainstream when Martin Wirsing took the post of full professor at the Ludwig-Maximilians-Universität München in that year. The chair for Programming \& Software Engineering (Lehrstuhl für Programmierung und Softwaretechnik, PST) was created along with his appointment. Indeed, the first Diplom course of studies in computer science (Informatik) at LMU had been created only the year before. The institute was still situated in the Leopoldstraße in Schwabing, only moving to the Institute am Englischen Garten in the Oettingenstraße in 1996 -- a beautiful location set right in the English Garden in Munich, five minutes away from one of the main tourist spots, the Biergarten am Chinesischen Turm. And with a history -- the building was the site of the former Radio Free Europe, a radio station funded by the U.S. Government with its own history (including a bomb attack in 1981 on the very building where PST is now located).
Fast-forward to 2015, where all of us are pretty much dependent on the computers in our pockets -- and where it is clear that computing has had, and will have in the future, a tremendous influence on our society and our way of life. It certainly has been an interesting time for computer scientists!
This development forms the framework for a personal story -- 24 years in which Martin has headed the PST group at LMU; 24 years in which he has supervised over two dozen doctoral theses, participated in nearly 40 research projects, and taught countless students; and 24 years in which he has organized a tiresome number of hikes in the Bavarian Alps...
In this short expression of thanks/collection of travel notes/feeble summary attempt (delete to taste) we shall revisit the PST history in the fields of science and education (Section 2), research projects (Section 3), supervision of doctorates and habilitations (Section 4), and the PST as a fun place to work (Section 5). We invite you to come with us on this journey!
For the entire collection see [Zbl 1312.68005].Towards partial order reductions for strategic ability.https://www.zbmath.org/1453.681972021-02-27T13:50:00+00:00"Jamroga, Wojciech"https://www.zbmath.org/authors/?q=ai:jamroga.wojciech"Penczek, Wojciech"https://www.zbmath.org/authors/?q=ai:penczek.wojciech"Sidoruk, Teofil"https://www.zbmath.org/authors/?q=ai:sidoruk.teofil"Dembiński, Piotr"https://www.zbmath.org/authors/?q=ai:dembinski.piotr"Mazurkiewicz, Antoni"https://www.zbmath.org/authors/?q=ai:mazurkiewicz.antoni-wSummary: We propose a general semantics for strategic abilities of agents in asynchronous systems, with and without perfect information. Based on the semantics, we show some general complexity results for verification of strategic abilities in asynchronous interaction. More importantly, we develop a methodology for \textit{partial order reduction} in verification of agents with imperfect information. We show that the reduction preserves an important subset of strategic properties, with as well as without the fairness assumption. We also demonstrate the effectiveness of the reduction on a number of benchmarks. Interestingly, the reduction does not work for strategic abilities under perfect information.Tree path majority data structures.https://www.zbmath.org/1453.680622021-02-27T13:50:00+00:00"Gagie, Travis"https://www.zbmath.org/authors/?q=ai:gagie.travis"He, Meng"https://www.zbmath.org/authors/?q=ai:he.meng"Navarro, Gonzalo"https://www.zbmath.org/authors/?q=ai:navarro.gonzalo"Ochoa, Carlos"https://www.zbmath.org/authors/?q=ai:ochoa.carlosSummary: We present the first solution to finding \(\tau \)-majorities on tree paths. Given a tree of \(n\) nodes, each with a label from \([1 . . 
\sigma]\), and a fixed threshold \(0 < \tau < 1\), such a query gives two nodes \(u\) and \(v\) and asks for all the labels that appear more than \(\tau \cdot | P_{u v} |\) times in the path \(P_{u v}\) from \(u\) to \(v\), where \(| P_{u v} |\) denotes the number of nodes in \(P_{u v}\). Note that the answer to any query is of size up to \(1 / \tau \). On a \(w\)-bit RAM, we obtain a linear-space data structure with \(O((1 / \tau) \lg \lg_w \sigma)\) query time, which is worst-case optimal for polylogarithmic-sized alphabets. We also describe two succinct-space solutions with query time \(O((1 / \tau) \lg^\ast n \lg \lg_w \sigma)\). One uses \(2 n H + 4 n + o(n)(H + 1)\) bits, where \(H \leq \lg \sigma\) is the entropy of the label distribution; the other uses \(n H + O(n) + o(n H)\) bits. By using just \(o(n \lg \sigma)\) extra bits, our succinct structures allow \(\tau\) to be specified at query time. We obtain analogous results to find a \(\tau \)-minority, that is, an element that appears between 1 and \(\tau \cdot | P_{u v} |\) times in \(P_{u v}\).Two more characterizations of \(K\)-triviality.https://www.zbmath.org/1453.030412021-02-27T13:50:00+00:00"Greenberg, Noam"https://www.zbmath.org/authors/?q=ai:greenberg.noam"Miller, Joseph S."https://www.zbmath.org/authors/?q=ai:miller.joseph-s"Monin, Benoit"https://www.zbmath.org/authors/?q=ai:monin.benoit"Turetsky, Daniel"https://www.zbmath.org/authors/?q=ai:turetsky.daniel-dSummary: We give two new characterizations of \(K\)-triviality. We show that if for all \(Y\) such that \(\Omega\) is \(Y\)-random, \(\Omega\) is \((Y\oplus A)\)-random, then \(A\) is \(K\)-trivial. The other direction was proved by Stephan and Yu, giving us the first titular characterization of \(K\)-triviality and answering a question of Yu. We also prove that if \(A\) is \(K\)-trivial, then for all \(Y\) such that \(\Omega\) is \(Y\)-random, \((Y\oplus A)\equiv_{\mathrm{LR}}Y\). This answers a question of Merkle and Yu. 
The other direction is immediate, so we have the second characterization of \(K\)-triviality.
The proof of the first characterization uses a new cupping result. We prove that if \(A\nleq_{\mathrm{LR}}B\), then for every set \(X\) there is a \(B\)-random set \(Y\) such that \(X\) is computable from \(Y\oplus A\).Succinct data structures for families of interval graphs.https://www.zbmath.org/1453.680602021-02-27T13:50:00+00:00"Acan, Hüseyin"https://www.zbmath.org/authors/?q=ai:acan.huseyin"Chakraborty, Sankardeep"https://www.zbmath.org/authors/?q=ai:chakraborty.sankardeep"Jo, Seungbum"https://www.zbmath.org/authors/?q=ai:jo.seungbum"Satti, Srinivasa Rao"https://www.zbmath.org/authors/?q=ai:satti.srinivasa-raoSummary: We consider the problem of designing succinct data structures for interval graphs with \(n\) vertices while supporting degree, adjacency, neighborhood and shortest path queries in optimal time. Towards showing succinctness, we first show that at least \(n\log_2{n}-2n\log_2\log_2 n-O(n)\) bits are necessary to represent any unlabeled interval graph \(G\) with \(n\) vertices, answering an open problem of
\textit{J. C. Yang} and \textit{N. Pippenger} [Proc. Am. Math. Soc., Ser. B 4, 1--3 (2017; Zbl 1358.05141)].
This is augmented by a data structure of size \(n\log_2{n}+O(n)\) bits that not only supports the above queries optimally but is also capable of executing various combinatorial algorithms (like proper coloring, maximum independent set, etc.) on interval graphs efficiently. Finally, we extend our ideas to other variants of interval graphs, for example, proper/unit, \(k\)-improper interval graphs, and circular-arc graphs, and design succinct data structures for these graph classes as well, supporting queries on them efficiently.
For the entire collection see [Zbl 1419.68009].Resilience of routing in parallel link networks.https://www.zbmath.org/1453.910282021-02-27T13:50:00+00:00"Altman, Eitan"https://www.zbmath.org/authors/?q=ai:altman.eitan"Singhal, Aniruddha"https://www.zbmath.org/authors/?q=ai:singhal.aniruddha"Touati, Corinne"https://www.zbmath.org/authors/?q=ai:touati.corinne"Li, Jie"https://www.zbmath.org/authors/?q=ai:li.jie.2|li.jie.1Summary: We revisit in this paper the resilience problem of routing traffic in a parallel link network model with a malicious player using a game theoretic framework. Consider that there are two players in the network: the first player wishes to split its traffic so as to minimize its average delay, which the second player, i.e., the malicious player, tries to maximize. The first player has a demand constraint on the total traffic it routes. The second player controls the link capacities: it can decrease by some amount the capacity of each link under a constraint on the sum of capacity degradation. We first show that the average delay function is convex both in traffic and in capacity degradation over the parallel links and thus does not have a saddle point. We identify best-response strategies of each player and compute both the max-min and the min-max values of the game. We are especially interested in the min-max strategy as it guarantees the best performance under the worst possible link capacity degradation. It thus allows us to obtain routing strategies that are resilient and robust. We compare the results of the min-max to those obtained under the max-min strategies. We provide stable algorithms for computing both max-min and min-max strategies as well as for best responses.
For the entire collection see [Zbl 1347.68013].An integrated first-order theory of points and intervals over linear orders. I.https://www.zbmath.org/1453.030132021-02-27T13:50:00+00:00"Conradie, Willem"https://www.zbmath.org/authors/?q=ai:conradie.willem-j"Durhan, Salih"https://www.zbmath.org/authors/?q=ai:durhan.salih"Sciavicco, Guido"https://www.zbmath.org/authors/?q=ai:sciavicco.guidoSummary: There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as a particular case of duration-less intervals. A recent result by \textit{P. Balbiani} et al. [Electron. Notes Theor. Comput. Sci. 278, 31--45 (2011; Zbl 1347.03030)] presented an explicit two-sorted point-interval temporal framework in which time instants (points) and time periods (intervals) are considered on a par, allowing the perspective to shift between these within the formal discourse. We consider here two-sorted first-order languages based on the same principle, and therefore including relations, as first studied by Reich, among others, between points, between intervals, and inter-sort. We give complete classifications of its sub-languages in terms of relative expressive power, thus determining how many, and which, are the intrinsically different extensions of two-sorted first-order logic with one or more such relations. 
This approach roots out the classical problem of whether or not points should be included in an interval-based semantics.Independent set reconfiguration parameterized by modular-width.https://www.zbmath.org/1453.681232021-02-27T13:50:00+00:00"Belmonte, Rémy"https://www.zbmath.org/authors/?q=ai:belmonte.remy"Hanaka, Tesshu"https://www.zbmath.org/authors/?q=ai:hanaka.tesshu"Lampis, Michael"https://www.zbmath.org/authors/?q=ai:lampis.michael"Ono, Hirotaka"https://www.zbmath.org/authors/?q=ai:ono.hirotaka"Otachi, Yota"https://www.zbmath.org/authors/?q=ai:otachi.yotaSummary: Independent Set Reconfiguration is one of the most well-studied problems in the setting of combinatorial reconfiguration. It is known that the problem is PSPACE-complete even for graphs of bounded bandwidth. This fact rules out the tractability of parameterizations by most well-studied structural parameters as most of them generalize bandwidth. In this paper, we study the parameterization by modular-width, which is not comparable with bandwidth. We show that the problem parameterized by modular-width is fixed-parameter tractable under all previously studied rules \(\mathsf{TAR}\), \(\mathsf{TJ}\), and \(\mathsf{TS}\). The result under \(\mathsf{TAR}\) resolves an open problem posed
by \textit{P. Bonsma} [J. Graph Theory 83, No. 2, 164--195 (2016; Zbl 1346.05209)].
For the entire collection see [Zbl 1425.68019].Conflict free version of covering problems on graphs: classical and parameterized.https://www.zbmath.org/1453.681352021-02-27T13:50:00+00:00"Jain, Pallavi"https://www.zbmath.org/authors/?q=ai:jain.pallavi"Kanesh, Lawqueen"https://www.zbmath.org/authors/?q=ai:kanesh.lawqueen"Misra, Pranabendu"https://www.zbmath.org/authors/?q=ai:misra.pranabenduSummary: Let \({\Pi}\) be a family of graphs. In the classical \({\Pi}\)-Vertex Deletion problem, given a graph \(G\) and a positive integer \(k\), the objective is to check whether there exists a subset \(S\) of at most \(k\) vertices such that \(G-S\) is in \({\Pi}\). In this paper, we introduce the conflict free version of this classical problem, namely Conflict Free \({\Pi}\)-Vertex Deletion (CF-\({\Pi}\)-VD), and study these problems from the viewpoint of classical and parameterized complexity. In the CF-\({\Pi}\)-VD problem, given two graphs \(G\) and \(H\) on the same vertex set and a positive integer \(k\), the objective is to determine whether there exists a set \(S\subseteq V(G)\), of size at most \(k\), such that \(G-S\) is in \({\Pi}\) and \(H[S]\) is edgeless. Initiating a systematic study of these problems is one of the main conceptual contributions of this work. We obtain several results on the conflict free version of several classical problems. Our first result shows that if \({\Pi}\) is characterized by a finite family of forbidden induced subgraphs then CF-\({\Pi}\)-VD is Fixed Parameter Tractable (FPT). Furthermore, we obtain improved algorithms for the conflict free version of several well studied problems. Next, we show that if \({\Pi}\) is characterized by a ``well-behaved'' infinite family of forbidden induced subgraphs, then CF-\({\Pi}\)-VD is W[1]-hard. Motivated by this hardness result, we consider the parameterized complexity of CF-\({\Pi}\)-VD when \(H\) is restricted to well studied families of graphs.
In particular, we show that the conflict free versions of several well-known problems such as Feedback Vertex Set, Odd Cycle Transversal, Chordal Vertex Deletion and Interval Vertex Deletion are FPT when \(H\) belongs to the families of \(d\)-degenerate graphs and nowhere dense graphs.
For the entire collection see [Zbl 1388.68008].Nearly optimal sparse polynomial multiplication.https://www.zbmath.org/1453.682242021-02-27T13:50:00+00:00"Nakos, Vasileios"https://www.zbmath.org/authors/?q=ai:nakos.vasileiosEditorial remark: No review copy delivered.Deconstructing generative adversarial networks.https://www.zbmath.org/1453.625182021-02-27T13:50:00+00:00"Zhu, Banghua"https://www.zbmath.org/authors/?q=ai:zhu.banghua"Jiao, Jiantao"https://www.zbmath.org/authors/?q=ai:jiao.jiantao"Tse, David"https://www.zbmath.org/authors/?q=ai:tse.david-n-cEditorial remark: No review copy delivered.Single-error detection and correction for duplication and substitution channels.https://www.zbmath.org/1453.940472021-02-27T13:50:00+00:00"Tang, Yuanyuan"https://www.zbmath.org/authors/?q=ai:tang.yuanyuan"Yehezkeally, Yonatan"https://www.zbmath.org/authors/?q=ai:yehezkeally.yonatan"Schwartz, Moshe"https://www.zbmath.org/authors/?q=ai:schwartz.moshe"Farnoud, Farzad"https://www.zbmath.org/authors/?q=ai:farnoud.farzadEditorial remark: No review copy delivered.A \((5.83+\epsilon)\)-approximation algorithm for universal facility location problem with linear penalties.https://www.zbmath.org/1453.900982021-02-27T13:50:00+00:00"Xu, Yicheng"https://www.zbmath.org/authors/?q=ai:xu.yicheng"Xu, Dachuan"https://www.zbmath.org/authors/?q=ai:xu.dachuan"Du, Donglei"https://www.zbmath.org/authors/?q=ai:du.donglei"Wu, Chenchen"https://www.zbmath.org/authors/?q=ai:wu.chenchenSummary: In the universal facility location problem, we are given a set of clients and facilities. Our goal is to find an assignment such that the total connection and facility cost is minimized. The connection cost is proportional to the distance between each client and its assigned facility, whereas the facility cost is a nondecreasing function with respect to the total number of clients assigned to the facility. 
The universal facility location problem generalizes several classical facility location problems, including the uncapacitated facility location problem and the capacitated facility location problem (both hard and soft capacities). This work considers the universal facility location problem with linear penalties, where each client can be rejected for service with a penalty. The objective is to minimize the total connection, facility and penalty cost. We present a \((5.83+\epsilon)\)-approximation local search algorithm for this problem.
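The objective just summarized decomposes into connection, facility, and penalty cost. As a minimal illustrative sketch (all names, the distance map, and the concrete facility cost function below are placeholders chosen for illustration, not the paper's notation or algorithm), the cost of a fixed assignment can be evaluated as follows:

```python
# Hedged sketch: evaluating the objective of the universal facility
# location problem with linear penalties, as described above. All data
# (client/facility names, distances, the facility cost function) are
# illustrative placeholders.

def total_cost(assignment, penalties, dist, facility_cost):
    """assignment maps each client to a facility, or to None if the
    client is rejected for service (paying its penalty instead).
    dist maps (client, facility) pairs to connection costs, and
    facility_cost maps a number of assigned clients to a cost
    (a nondecreasing function, per the problem definition)."""
    connection = sum(dist[c, f] for c, f in assignment.items() if f is not None)
    penalty = sum(penalties[c] for c, f in assignment.items() if f is None)
    # The facility cost depends only on how many clients a facility serves.
    load = {}
    for f in assignment.values():
        if f is not None:
            load[f] = load.get(f, 0) + 1
    facility = sum(facility_cost(u) for u in load.values())
    return connection + facility + penalty
```

For instance, serving clients 0 and 1 from facility 'a' at distances 1 and 2, rejecting client 2 at penalty 3, with facility cost \(4+u\) for \(u\) assigned clients, gives total cost \(3+6+3=12\). A local search algorithm such as the one in the paper moves between such assignments while this objective decreases.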
For the entire collection see [Zbl 1326.68025].The capacity of private information retrieval from uncoded storage constrained databases.https://www.zbmath.org/1453.941302021-02-27T13:50:00+00:00"Attia, Mohamed Adel"https://www.zbmath.org/authors/?q=ai:attia.mohamed-adel"Kumar, Deepak"https://www.zbmath.org/authors/?q=ai:kumar.deepak"Tandon, Ravi"https://www.zbmath.org/authors/?q=ai:tandon.raviEditorial remark: No review copy delivered.Persistent homology of graph-like digital images.https://www.zbmath.org/1453.550052021-02-27T13:50:00+00:00"Ege, Ozgur"https://www.zbmath.org/authors/?q=ai:ege.ozgur"Karaca, Ismet"https://www.zbmath.org/authors/?q=ai:karaca.ismetThis is a paper in the area of graph-like digital images, a model of digital topology that views a digital image as a finite set of points having some adjacency relation. Simplicial homology groups have been defined in this setting by \textit{L. Boxer} et al. [J. Math. Sci. Adv. Appl. 11, No. 2, 109--140 (2011; Zbl 1276.55011)]. The authors go to lengths to situate their work in the setting of a ``digital simplicial complex'', but this is unnecessary since a digital image carries an intrinsic canonical simplicial structure.
The authors define persistent homology groups for digital images using adaptations of standard definitions, and prove some basic properties. They give specific computations in simple examples, and present ``Some applications to image processing'', which involve some less simple examples, leveraging computations in the existing literature.
Reviewer: Christopher Staecker (Fairfield)Introduction to the special issue on discovery science.https://www.zbmath.org/1453.680132021-02-27T13:50:00+00:00"Ceci, Michelangelo (ed.)"https://www.zbmath.org/authors/?q=ai:ceci.michelangelo"Calders, Toon (ed.)"https://www.zbmath.org/authors/?q=ai:calders.toonFrom the text: Welcome to the Discovery Science special issue. This issue contains both extended papers from the Discovery Science 2016 conference, held in Bari, Italy (19--21 October 2016), as well as new contributions solicited by an open call.Lower bounds for the happy coloring problems.https://www.zbmath.org/1453.681252021-02-27T13:50:00+00:00"Bliznets, Ivan"https://www.zbmath.org/authors/?q=ai:bliznets.ivan-a"Sagunov, Danil"https://www.zbmath.org/authors/?q=ai:sagunov.danilSummary: In this paper, we study the Maximum Happy Vertices and the Maximum Happy Edges problems (MHV and MHE for short). Very recently, the problems attracted a lot of attention and
were studied
in [\textit{A. Agrawal}, Lect. Notes Comput. Sci. 10765, 103--115 (2018; Zbl 06890111); \textit{N. R. Aravind} et al., Lect. Notes Comput. Sci. 9843, 281--292 (2016; Zbl 06631028); \textit{J. Choudhari} and \textit{I. V. Reddy}, Lect. Notes Comput. Sci. 10755, 228--239 (2018; Zbl 06889933); \textit{N. Misra} and \textit{I. V. Reddy}, Lect. Notes Comput. Sci. 10765, 142--153 (2018; Zbl 06890114)].
Main focus of our work is lower bounds on the computational complexity of these problems. Established lower bounds can be divided into the following groups: NP-hardness of the above guarantee parameterization, kernelization lower bounds (answering questions of
Misra and Reddy [loc. cit.]),
exponential lower bounds under the Set Cover Conjecture and the Exponential Time Hypothesis, and inapproximability results. Moreover, we present an \({\mathcal{O}}^*(\ell^k)\) randomized algorithm for MHV and an \({\mathcal{O}}^*(2^k)\) algorithm for MHE, where \(\ell\) is the number of colors used and \(k\) is the number of required happy vertices or edges. In view of the proved lower bounds, these algorithms cannot be improved to subexponential ones.
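For readers unfamiliar with the problems, a brute-force sketch of Maximum Happy Vertices may help fix the definition. The formulation used here (extend a partial coloring with \(\ell\) colors so as to maximize the number of vertices whose neighbors all share their color) is recalled from the happy-coloring literature, not restated in the summary above, and the code is an illustrative exponential baseline, not the paper's parameterized algorithm:

```python
from itertools import product

# Hedged brute-force sketch of Maximum Happy Vertices (MHV). A vertex is
# "happy" if all of its neighbors receive the same color as itself; the
# task is to extend a given partial coloring with colors 1..ell so as to
# maximize the number of happy vertices. (Definition recalled from the
# literature; this exhaustive search is for intuition only.)

def max_happy_vertices(adj, precolor, ell):
    """adj: dict mapping each vertex to the set of its neighbors.
    precolor: dict mapping the precolored vertices to their colors."""
    free = [v for v in adj if v not in precolor]
    best = 0
    # Try every extension of the partial coloring (ell ** len(free) many).
    for colors in product(range(1, ell + 1), repeat=len(free)):
        col = dict(precolor)
        col.update(zip(free, colors))
        happy = sum(1 for v in adj if all(col[u] == col[v] for u in adj[v]))
        best = max(best, happy)
    return best
```

On the path 0-1-2 with endpoints precolored 1 and 2, no extension makes more than one vertex happy, whereas with only vertex 0 precolored the whole path can be made monochromatic and all three vertices happy.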
For the entire collection see [Zbl 1416.68009].Guest editors' introduction to the special issue for the ECML PKDD 2018 journal track.https://www.zbmath.org/1453.680142021-02-27T13:50:00+00:00"Davis, Jesse (ed.)"https://www.zbmath.org/authors/?q=ai:davis.jesse"Bringmann, Bjørn (ed.)"https://www.zbmath.org/authors/?q=ai:bringmann.bjorn"Fromont, Elisa (ed.)"https://www.zbmath.org/authors/?q=ai:fromont.elisa"Greene, Derek (ed.)"https://www.zbmath.org/authors/?q=ai:greene.derekFrom the text: This special issue contains papers accepted for the Machine Learning Journal as part of the ECML PKDD 2018 journal track. The journal track began at ECML PKDD 2013 with the goal of offering authors the opportunity to have a journal publication while still being able to present their work during a conference talk. Through the track, authors have the opportunity to submit a paper to either the Machine Learning Journal or the Data Mining and Knowledge Discovery journal that, if accepted, would appear in a special issue of the respective journal. Authors of accepted journal track articles present their work at the ECML PKDD 2018 conference which takes place in Dublin, Ireland from September 10 through 14, 2018.Logic and rational languages of scattered and countable series-parallel posets.https://www.zbmath.org/1453.030372021-02-27T13:50:00+00:00"Amrane, Amazigh"https://www.zbmath.org/authors/?q=ai:amrane.amazigh"Bedon, Nicolas"https://www.zbmath.org/authors/?q=ai:bedon.nicolasThis paper gives a characterization in terms of logical expressions for some specific partial order indexed rational languages.
More precisely, the authors consider N-free posets, which are models of concurrent programs equivalent to the series-parallel digraphs of [\textit{J. Valdes} et al., SIAM J. Comput. 11, 298--313 (1982; Zbl 0478.68065)]. Generalizing the notion of a finite word, the paper considers mappings from countable N-free posets, whose chains are scattered (i.e. have no dense sub-ordering) and whose antichains are finite, into finite alphabets. In this setting, the notion of rational language introduced in [\textit{N. Bedon} and \textit{C. Rispal}, Theor. Comput. Sci. 412, No. 22, 2356--2369 (2011; Zbl 1238.68072)] is shown to be equivalent to definability in the logic P-MSO of [\textit{N. Bedon}, Log. Methods Comput. Sci. 11, No. 4, Paper No. 2, 38 p. (2015; Zbl 1448.68255)]. P-MSO combines monadic second-order logic with Presburger arithmetic, which is the theory of natural numbers under addition.
Since the transformation from P-MSO to rational expressions follows well-known techniques, the paper focuses on the converse transformation. The proof proceeds by first constructing, from a rational expression, a more specific one called a \(>1\)-expression, and from this \(>1\)-expression a graph, called a \(D\)-graph, before constructing the final P-MSO formula.
The paper is the extended version, with proofs, of [the authors, Lect. Notes Comput. Sci. 11417, 275--287 (2019; Zbl 1425.03013)].
Reviewer: Roger Villemaire (Montréal)Fruit fly optimisation algorithm with adaptive sign processing.https://www.zbmath.org/1453.901952021-02-27T13:50:00+00:00"Guo, Xiaodong"https://www.zbmath.org/authors/?q=ai:guo.xiaodong"Zhang, Xueliang"https://www.zbmath.org/authors/?q=ai:zheng.xueliang"Wang, Lifang"https://www.zbmath.org/authors/?q=ai:wang.lifangSummary: Inspired by probability estimation for code-words in adaptive arithmetic coding, a fruit fly optimisation algorithm with adaptive sign processing is presented to solve a class of optimisation problems with a negative optimum point. The sign probabilities of previously optimal solutions are dynamically calculated, and the signs of the smell concentration judgement value are adaptively created, so that drosophila swarms search faster and more stably than those obtained in a random manner. Simulation results for test functions show that this adaptive sign processing is efficient and promising for the fruit fly optimisation algorithm.On functions derived from fuzzy implications.https://www.zbmath.org/1453.681812021-02-27T13:50:00+00:00"Grzegorzewski, Przemysław"https://www.zbmath.org/authors/?q=ai:grzegorzewski.przemyslawSummary: Recently, fuzzy implications based on copulas, i.e. probabilistic implications and probabilistic S-implications, were introduced and their properties were explored. However, the reverse problem of copulas derived from fuzzy implications, suggested by
\textit{S. Massanet} et al. [``Defining copulas from fuzzy implication functions'', in: Proceedings of the 8th international summer school on aggregation operators, AGOP'2015. Katowice: University of Silesia (2015); ``On two construction methods of copulas from fuzzy implication functions'', Prog. Artif. Intell. 5, 1--4 (2016; \url{doi:10.1007/s13748-015-0069-6})],
is also of interest. In the paper we consider geometric properties of those fuzzy implications that generate copulas. Moreover, we consider the reverse problem for some generalizations of copulas like quasi-copulas and semi-copulas.
For the entire collection see [Zbl 1385.68005].Sensitivity, affine transforms and quantum communication complexity.https://www.zbmath.org/1453.680832021-02-27T13:50:00+00:00"Dinesh, Krishnamoorthy"https://www.zbmath.org/authors/?q=ai:dinesh.krishnamoorthy"Sarma, Jayalal"https://www.zbmath.org/authors/?q=ai:sarma-m-n.jayalalSummary: In this paper, we study the Boolean function parameters sensitivity \((\mathsf{s})\), block sensitivity \((\mathsf{bs})\), and alternation \((\mathsf{alt})\) under specially designed affine transforms and show several applications. For a function \(f:{\mathbb{F}}_2^n \rightarrow\{0,1\}\), and \(A=Mx+b\) for \(M\in{\mathbb{F}}_2^{n\times n}\) and \(b\in{\mathbb{F}}_2^n\), the result of the transformation \(g\) is defined as \(\forall x\in{\mathbb{F}}_2^n,g(x)=f(Mx+b)\).
As a warm up, we study alternation under linear shifts (when \(M\) is restricted to be the identity matrix) called the shift invariant alternation (the smallest alternation that can be achieved for the Boolean function \(f\) by shifts, denoted by \({\mathsf{salt}}(f))\).
By a result of \textit{C. Lin} and \textit{S. Zhang} [LIPIcs -- Leibniz Int. Proc. Inform. 80, Article 51, 13 p. (2017; Zbl 1441.68053)],
it follows that \(\mathsf{bs}(f) \le O({\mathsf{salt}}(f)^2{\mathsf{s}}(f))\). Thus, to settle the Sensitivity Conjecture \((\forall ~f,{\mathsf{bs}}(f)\le{\mathsf{poly}}({\mathsf{s}}(f)))\), it suffices to argue that \(\forall ~f, {\mathsf{salt}}(f) \le{\mathsf{poly}}({\mathsf{s}}(f))\). However, we exhibit an explicit family of Boolean functions for which \({\mathsf{salt}}(f)\) is \(2^{\varOmega({\mathsf{s}}(f))}\).
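The measures discussed here can be checked by brute force for very small \(n\). The sketch below is only an illustration of the definitions (not the authors' construction); the function names and the bitmask encoding of inputs are our own conventions.

```python
from itertools import permutations

def sensitivity(f, n):
    # s(f): maximum, over all inputs x, of the number of single-bit
    # flips x -> x XOR e_i that change the value f(x)
    return max(sum(f(x ^ (1 << i)) != f(x) for i in range(n))
               for x in range(1 << n))

def alternation(f, n):
    # alt(f): maximum number of value changes of f along a maximal
    # chain 0^n -> 1^n of the Boolean lattice (all n! chains; small n only)
    best = 0
    for order in permutations(range(n)):
        x, prev, changes = 0, f(0), 0
        for i in order:
            x |= 1 << i
            if f(x) != prev:
                prev, changes = f(x), changes + 1
        best = max(best, changes)
    return best

def salt(f, n):
    # salt(f): smallest alternation achievable by a shift x -> x XOR b
    return min(alternation(lambda x: f(x ^ b), n) for b in range(1 << n))

parity3 = lambda x: bin(x).count("1") % 2  # parity of the bits of x
and2 = lambda x: int(x == 3)               # AND of two bits
```

For parity on 3 bits every step of every chain flips the value, so sensitivity, alternation, and \({\mathsf{salt}}\) all equal 3 (no shift helps); for the 2-bit AND, sensitivity is 2 while the alternation and its shift-invariant version are 1.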
Going further, we use an affine transform \(A\), such that the corresponding function \(g\) satisfies \({\mathsf{bs}}(f,0^n)\le{\mathsf{s}}(g)\), to prove that for \(F(x,y) {\mathop{=}\limits^{\mathrm{def}}}f(x \wedge y)\), the bounded error quantum communication complexity of \(F\) with prior entanglement, \(Q^*_{1/3}(F)\) is \(\varOmega(\sqrt{{\mathsf{bs}}(f,0^n)})\). Our proof builds on ideas from
[\textit{A. A. Sherstov}, Quantum Inf. Comput. 10, No. 5--6, 435--455 (2010; Zbl 1237.81040)]
where we use specific properties of the above affine transformation. Using this, we show the following.
\begin{itemize}
\item [(a)] For a fixed prime \(p\) and an \(\epsilon,0<\epsilon<1\), any Boolean function \(f\) that depends on all its inputs with \({\mathsf{deg}}_p(f)\le(1-\epsilon)\log n\) must satisfy \(Q^*_{1/3}(F)=\varOmega\left(\frac{n^{\epsilon/2}}{\log n}\right)\). Here, \({\mathsf{deg}}_p(f)\) denotes the degree of the multilinear polynomial over \({\mathbb{F}}_p\) which agrees with \(f\) on Boolean inputs.
\item [(b)] For Boolean functions \(f\) such that there exist primes \(p\) and \(q\) with \({\mathsf{deg}}_q(f)\ge\varOmega({\mathsf{deg}}_p(f)^\delta)\) for \(\delta>2\), the deterministic communication complexity \({\mathsf{D}}(F)\) and \(Q^*_{1/3}(F)\) are polynomially related. In particular, this holds when \({\mathsf{deg}}_p(f)=O(1)\). Thus, for this class of functions, this answers an open question
(see [\textit{H. Buhrman} and \textit{R. de Wolf}, ``Communication complexity lower bounds by polynomials'', in: Proceedings of the 16th annual IEEE conference on computational complexity, CCC'01. Los Alamitos, CA: IEEE Computer Society. 120--130 (2001; \url{doi:10.1109/CCC.2001.933879})])
about the relation between the two measures.
\end{itemize}
Restricting back to the linear setting, we construct a linear transformation \(A\) such that the corresponding function \(g\) satisfies \({\mathsf{alt}}(f)\le 2{\mathsf{s}}(g)+1\). Using this new relation, we exhibit Boolean functions \(f\) (other than the parity function) such that \({\mathsf{s}}(f)\) is \(\varOmega (\sqrt{\mathsf{sparsity}(f)})\) where \(\mathsf{sparsity}(f)\) is the number of non-zero coefficients in the Fourier representation of \(f\).
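The Fourier sparsity used in the last statement can be computed directly from its definition for small \(n\); the following sketch (illustrative only, with our own naming) counts the nonzero Fourier coefficients of \((-1)^{f}\).

```python
def fourier_sparsity(f, n):
    # sparsity(f): number of nonzero Fourier coefficients of (-1)^f,
    # where the coefficient at S is 2^{-n} * sum_x (-1)^(f(x) + <S,x>)
    # and <S,x> is the inner product of the index sets over F_2
    N = 1 << n
    nonzero = 0
    for s in range(N):
        coeff = sum((-1) ** (f(x) ^ (bin(x & s).count("1") % 2))
                    for x in range(N))
        if coeff != 0:
            nonzero += 1
    return nonzero

parity3 = lambda x: bin(x).count("1") % 2  # parity of the bits of x
```

Parity has sparsity 1 (its only nonzero coefficient sits at the full index set), which is why it is excluded in the statement above, while the 2-bit AND has all \(4\) coefficients nonzero.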
For the entire collection see [Zbl 1416.68009].The complexity of tree partitioning.https://www.zbmath.org/1453.681192021-02-27T13:50:00+00:00"An, Zhao"https://www.zbmath.org/authors/?q=ai:an.zhao"Feng, Qilong"https://www.zbmath.org/authors/?q=ai:feng.qilong"Kanj, Iyad"https://www.zbmath.org/authors/?q=ai:kanj.iyad-a"Xia, Ge"https://www.zbmath.org/authors/?q=ai:xia.geSummary: Given a tree \(T\) on \(n\) vertices, and \(k, b, s_1, \ldots , s_b \in \mathbb {N}\), the Tree Partitioning problem asks if at most \(k\) edges can be removed from \(T\) so that the resulting components can be grouped into \(b\) groups such that the number of vertices in group \(i\) is \(s_i\), for \(i =1, \ldots , b\). The case when \(s_1=\cdots =s_b =n/b\), referred to as the Balanced Tree Partitioning problem, was shown to be \(\mathcal {NP}\)-complete for trees of maximum degree at most 5, and the complexity of the problem for trees of maximum degree 4 and 3 was posed as an open question. The parameterized complexity of Balanced Tree Partitioning was also posed as an open question in another work.{
}In this paper, we answer both open questions negatively. We show that Balanced Tree Partitioning (and hence, Tree Partitioning) is \(\mathcal {NP}\)-complete for trees of maximum degree 3, thus closing the door on the complexity of Balanced Tree Partitioning, as the simple case when \(T\) is a path is in \(\mathcal P\). In terms of the parameterized complexity of the problems, we show that both Balanced Tree Partitioning and Tree Partitioning are W[1]-complete. Finally, using a compact representation of the solution space for an instance of the problem, we present a dynamic programming algorithm for Tree Partitioning (and hence, for Balanced Tree Partitioning) that runs in subexponential time \(2^{O(\sqrt{n})}\), adding a natural problem to the list of problems that can be solved in subexponential time.
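For intuition about the problem statement, the decision question can be checked exhaustively on very small instances. The sketch below is a naive checker written from the definition above, not the paper's subexponential algorithm; it tries every set of at most \(k\) removed edges and every grouping of the resulting components into the prescribed sizes.

```python
from itertools import combinations

def tree_partition_exists(n, edges, k, sizes):
    # Tree Partitioning: can removing at most k edges of the tree split it
    # into components that can be grouped into b groups with vertex counts
    # s_1..s_b (given as `sizes`)?  Brute force; small n only.
    def components(removed):
        parent = list(range(n))          # union-find over surviving edges
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for (u, v) in edges:
            if (u, v) not in removed:
                parent[find(u)] = find(v)
        comp_size = {}
        for v in range(n):
            r = find(v)
            comp_size[r] = comp_size.get(r, 0) + 1
        return sorted(comp_size.values())

    def groupable(comps, targets):
        # can the component sizes be partitioned into groups summing
        # to the target sizes?
        if not targets:
            return not comps
        for r in range(1, len(comps) + 1):
            for idx in combinations(range(len(comps)), r):
                if sum(comps[i] for i in idx) == targets[0]:
                    rest = [c for j, c in enumerate(comps) if j not in idx]
                    if groupable(rest, targets[1:]):
                        return True
        return False

    for r in range(k + 1):
        for removed in combinations(edges, r):
            if groupable(components(set(removed)), list(sizes)):
                return True
    return False
```

On the path \(0-1-2-3\), removing the middle edge yields the balanced partition \((2,2)\) with \(k=1\), whereas no balanced partition exists with \(k=0\).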
For the entire collection see [Zbl 1369.68023].Certain methods of constructing controls for quantum systems.https://www.zbmath.org/1453.810342021-02-27T13:50:00+00:00"Pechen', A. N."https://www.zbmath.org/authors/?q=ai:pechen.aleksandr-nikolaevich|pechen.alexander-nikolaevichSummary: Recently, various control problems for quantum systems, such as individual atoms, molecules, and electron states at quantum points, have been actively researched. In this paper, we briefly discuss methods of constructing controls for quantum system by means of gradient algorithms, genetic algorithms, and the speed gradient. The violation of the asymptotic stabilizability condition for the problem of generation of unitary operations is proved by the speed gradient method in two-level quantum systems.Optimal online and offline algorithms for robot-assisted restoration of barrier coverage.https://www.zbmath.org/1453.681942021-02-27T13:50:00+00:00"Czyzowicz, J."https://www.zbmath.org/authors/?q=ai:czyzowicz.jurek"Kranakis, E."https://www.zbmath.org/authors/?q=ai:kranakis.evangelos"Krizanc, D."https://www.zbmath.org/authors/?q=ai:krizanc.danny"Narayanan, L."https://www.zbmath.org/authors/?q=ai:narayanan.lata"Opatrny, J."https://www.zbmath.org/authors/?q=ai:opatrny.jarda|opatrny.jaroslavSummary: Assume that \(n\) wireless sensors are initially arbitrarily distributed on a line segment barrier. Each sensor is said to cover the portion of the barrier that intersects with its sensing area. Owing to incorrect initial position, or the death of some of the sensors, the barrier is not completely covered by the sensors. We employ a mobile robot to move the sensors to final positions on the barrier such that barrier coverage is guaranteed. We seek algorithms that minimize the length of the robot's trajectory, since this allows the restoration of barrier coverage as soon as possible. 
We give an optimal linear-time offline algorithm that gives a minimum-length trajectory for a robot that starts at one end of the barrier and achieves the restoration of barrier coverage. We also study two different online models: one in which the online robot does not know the length of the barrier in advance, and the other in which the online robot knows the length of the barrier. For the case when the online robot does not know the length of the barrier, we prove a tight bound of \(3 /2\) on the competitive ratio, and we give a tight lower bound of \(5 /4\) on the competitive ratio in the other case.Online submodular maximization with preemption.https://www.zbmath.org/1453.682162021-02-27T13:50:00+00:00"Buchbinder, Niv"https://www.zbmath.org/authors/?q=ai:buchbinder.niv"Feldman, Moran"https://www.zbmath.org/authors/?q=ai:feldman.moran"Schwartz, Roy"https://www.zbmath.org/authors/?q=ai:schwartz.royBoosted multivariate trees for longitudinal data.https://www.zbmath.org/1453.681562021-02-27T13:50:00+00:00"Pande, Amol"https://www.zbmath.org/authors/?q=ai:pande.amol"Li, Liang"https://www.zbmath.org/authors/?q=ai:li.liang"Rajeswaran, Jeevanantham"https://www.zbmath.org/authors/?q=ai:rajeswaran.jeevanantham"Ehrlinger, John"https://www.zbmath.org/authors/?q=ai:ehrlinger.john"Kogalur, Udaya B."https://www.zbmath.org/authors/?q=ai:kogalur.udaya-b"Blackstone, Eugene H."https://www.zbmath.org/authors/?q=ai:blackstone.eugene-h"Ishwaran, Hemant"https://www.zbmath.org/authors/?q=ai:ishwaran.hemantSummary: Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically utilizing \(P\)-splines with estimated smoothing parameter. 
In order to avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of
forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data.An empirical study of on-line models for relational data streams.https://www.zbmath.org/1453.681582021-02-27T13:50:00+00:00"Srinivasan, Ashwin"https://www.zbmath.org/authors/?q=ai:srinivasan.ashwin"Bain, Michael"https://www.zbmath.org/authors/?q=ai:bain.michael-eSummary: To date, Inductive Logic Programming (ILP) systems have largely assumed that all data needed for learning have been provided at the onset of model construction. Increasingly, for application areas like telecommunications, astronomy, text processing, financial markets and biology, machine-generated data are being generated continuously and on a vast scale. We see at least four kinds of problems that this presents for ILP: (1) it may not be possible to store all of the data, even in secondary memory; (2) even if it were possible to store the data, it may be impractical to construct an acceptable model using partitioning techniques that repeatedly perform expensive coverage or subsumption-tests on the data; (3) models constructed at some point may become less effective, or even invalid, as more data become available (exemplified by the ``drift'' problem when identifying concepts); and (4) the representation of the data instances may need to change as more data become available (a kind of ``language drift'' problem). In this paper, we investigate the adoption of a stream-based on-line learning approach to relational data. Specifically, we examine the representation of relational data in both an infinite-attribute setting, and in the usual fixed-attribute setting, and develop implementations that use ILP engines in combination with on-line model-constructors. 
The behaviour of each program is investigated using a set of controlled experiments, and performance in practical settings is demonstrated by constructing complete theories for some of the largest biochemical datasets examined by ILP systems to date, including one with a million examples; to the best of our knowledge, the first time this has been empirically demonstrated with ILP on a real-world data set.On the use of stochastic local search techniques to revise first-order logic theories from examples.https://www.zbmath.org/1453.681722021-02-27T13:50:00+00:00"Paes, Aline"https://www.zbmath.org/authors/?q=ai:paes.aline"Zaverucha, Gerson"https://www.zbmath.org/authors/?q=ai:zaverucha.gerson"Santos Costa, Vítor"https://www.zbmath.org/authors/?q=ai:costa.vitor-santosSummary: Theory Revision from Examples is the process of repairing incorrect theories and/or improving incomplete theories from a set of examples. This process usually results in more accurate and comprehensible theories than purely inductive learning. However, so far, progress on the use of theory revision techniques has been limited by the large search space they yield. In this article, we argue that it is possible to reduce the search space of a theory revision system by introducing stochastic local search. More precisely, we introduce a number of stochastic local search components at the key steps of the revision process, and implement them on a state-of-the-art revision system that makes use of the most specific clause to constrain the search space. We show that with the use of these SLS techniques it is possible for the revision system to be executed in a feasible time, while still improving the initial theory and in a number of cases even reaching better accuracies than the deterministic revision
process. Moreover, in some cases the revision process can be faster and still achieve better accuracies than an ILP system learning from an empty initial hypothesis or assuming an initial theory to be correct.Fast rates by transferring from auxiliary hypotheses.https://www.zbmath.org/1453.681532021-02-27T13:50:00+00:00"Kuzborskij, Ilja"https://www.zbmath.org/authors/?q=ai:kuzborskij.ilja"Orabona, Francesco"https://www.zbmath.org/authors/?q=ai:orabona.francescoSummary: In this work we consider the learning setting where, in addition to the training set, the learner receives a collection of auxiliary hypotheses originating from other tasks. We focus on a broad class of ERM-based linear algorithms that can be instantiated with any non-negative smooth loss function and any strongly convex regularizer. We establish generalization and excess risk bounds, showing that, if the algorithm is fed with a good combination of source hypotheses, generalization happens at the fast rate \(\mathcal {O}(1/m)\) instead of the usual \(\mathcal {O}(1/\sqrt{m})\). On the other hand, if the source hypotheses combination is a misfit for the target task, we recover the usual learning rate. As a byproduct of our study, we also prove a new bound on the Rademacher complexity of the smooth loss class under weaker assumptions compared to previous works.A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics.https://www.zbmath.org/1453.681632021-02-27T13:50:00+00:00"E, Weinan"https://www.zbmath.org/authors/?q=ai:e.weinan"Ma, Chao"https://www.zbmath.org/authors/?q=ai:ma.chao"Wu, Lei"https://www.zbmath.org/authors/?q=ai:wu.lei.2|wu.lei.1|wu.lei.3|wu.lei.4Summary: A fairly comprehensive analysis is presented for the gradient descent dynamics for training two-layer neural network models in the situation when the parameters in both layers are updated. 
General initialization schemes as well as general regimes for the network width and training data size are considered. In the over-parametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to those of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space.
We prove our BCM solution secure and illustrate its use for veto voting protocols in limited communication infrastructures.
For the entire collection see [Zbl 1428.94012].MArBLed circuits: mixing arithmetic and Boolean circuits with active security.https://www.zbmath.org/1453.680822021-02-27T13:50:00+00:00"Rotaru, Dragos"https://www.zbmath.org/authors/?q=ai:rotaru.dragos"Wood, Tim"https://www.zbmath.org/authors/?q=ai:wood.timSummary: Most modern actively-secure multiparty computation (MPC) protocols involve generating random data that is secret-shared and authenticated, and using it to evaluate arithmetic or Boolean circuits in different ways. In this work we present a generic method for converting authenticated secret-shared data between different fields, and show how to use it to evaluate so-called ``mixed'' circuits with active security and in the full-threshold setting. A mixed circuit is one in which parties switch between different subprotocols dynamically as computation proceeds, the idea being that some protocols are more efficient for evaluating arithmetic circuits, and others for Boolean circuits.
One use case of our switching mechanism is for converting between secret-sharing-based MPC and garbled circuits (GCs). The former is more suited to the evaluation of arithmetic circuits and can easily be used to emulate arithmetic over the integers, whereas the latter is better for Boolean circuits and has constant round complexity. Much work already exists in the two-party semi-honest setting, but the \(n\)-party dishonest majority case was hitherto neglected.
We call the actively-secure mixed arithmetic/Boolean circuit a marbled circuit. Our implementation showed that mixing protocols in this way allows us to evaluate a linear Support Vector Machine with 400 times fewer AND gates than a solution using GC alone, albeit with twice the preprocessing required using only SPDZ
[\textit{I. Damgård} et al., Lect. Notes Comput. Sci. 7417, 643--662 (2012; Zbl 1296.94104)],
and thus our solution offers a tradeoff between online and preprocessing complexity. When evaluating over a WAN network, our online phase is 10 times faster than the plain SPDZ protocol.
For the entire collection see [Zbl 1428.94012].Annealing and replica-symmetry in deep Boltzmann machines.https://www.zbmath.org/1453.820872021-02-27T13:50:00+00:00"Alberici, Diego"https://www.zbmath.org/authors/?q=ai:alberici.diego"Barra, Adriano"https://www.zbmath.org/authors/?q=ai:barra.adriano"Contucci, Pierluigi"https://www.zbmath.org/authors/?q=ai:contucci.pierluigi"Mingione, Emanuele"https://www.zbmath.org/authors/?q=ai:mingione.emanueleSummary: In this paper we study the properties of the quenched pressure of a multi-layer spin-glass model (a deep Boltzmann Machine in artificial intelligence jargon) whose pairwise interactions are allowed between spins lying in adjacent layers and not inside the same layer nor among layers at distance larger than one. We prove a theorem that bounds the quenched pressure of such a K-layer machine in terms of K Sherrington-Kirkpatrick spin glasses and use it to investigate its annealed region. The replica-symmetric approximation of the quenched pressure is identified and its relation to the annealed one is considered. The paper also presents some observations on the model's architectural structure related to machine learning. Since escaping the annealed region is mandatory for meaningful training, by squeezing this region we obtain thermodynamical constraints on the form factors.
Remarkably, its optimal escape is achieved by requiring the last layer to scale sub-linearly in the network size.Stabbing circles for sets of segments in the plane.https://www.zbmath.org/1453.682032021-02-27T13:50:00+00:00"Claverol, Mercè"https://www.zbmath.org/authors/?q=ai:claverol.merce"Khramtcova, Elena"https://www.zbmath.org/authors/?q=ai:khramtcova.elena"Papadopoulou, Evanthia"https://www.zbmath.org/authors/?q=ai:papadopoulou.evanthia"Saumell, Maria"https://www.zbmath.org/authors/?q=ai:saumell.maria"Seara, Carlos"https://www.zbmath.org/authors/?q=ai:seara.carlosSummary: Stabbing a set \(S\) of \(n\) segments in the plane by a line is a well-known problem. In this paper we consider the variation where the stabbing object is a circle instead of a line. We show that the problem is tightly connected to two cluster Voronoi diagrams, in particular, the Hausdorff and the farthest-color Voronoi diagram. Based on these diagrams, we provide a method to compute a representation of all the combinatorially different stabbing circles for \(S\), and the stabbing circles with maximum and minimum radius. We give conditions under which our method is fast. These conditions are satisfied if the segments in \(S\) are parallel, resulting in a \(O(n \log^2{n})\) time and \(O(n)\) space algorithm. We also observe that the stabbing circle problem for \(S\) can be solved in worst-case optimal \(O(n^2)\) time and space by reducing the problem to computing the stabbing planes for a set of segments in 3D. Finally we show that the problem of computing the stabbing circle of minimum radius for a set of
\(n\) parallel segments of equal length has an \(\varOmega(n \log n)\) lower bound.Presentation of the book The Gröbner cover.https://www.zbmath.org/1453.130832021-02-27T13:50:00+00:00"Montes, Antonio"https://www.zbmath.org/authors/?q=ai:montes.antonioSummary: The aim of this paper is to present the recently published research book [the author, The Gröbner cover. Cham: Springer (2018; Zbl 1412.13001)]. This book is divided into two parts, one theoretical and one focusing on applications, and offers a complete description of the Canonical Gröbner Cover, to the author's best knowledge, the most accurate algebraic method for discussing parametric polynomial systems. It also includes applications to the automatic deduction of geometric theorems, loci computation and envelopes. The theoretical part is a self-contained exposition on the theory of Parametric Gröbner Systems and Bases. It begins with Weispfenning's introduction of Comprehensive Gröbner Systems (CGS), a fundamental contribution made in 1992, and provides a complete description of the Canonical Gröbner Cover (GC), which includes a canonical discussion of a set of parametric polynomial equations developed in [the author and \textit{M. Wibmer}, J. Symb. Comput. 45, No. 12, 1391--1425 (2010; Zbl 1207.13018)]. In turn, the application part selects three problems for which the Gröbner Cover offers valuable new perspectives. The automatic deduction of geometric theorems (ADGT) becomes fully automatic and straightforward using GC, representing a major improvement on all previous methods. In terms of loci and envelope computation, GC makes it possible to introduce a taxonomy of the components and automatically compute it. The book also generalizes the definition of the envelope of a family of hyper-surfaces, and provides algorithms for its computation, as well as for discussing how to determine the real envelope.
All the algorithms described in the book have also been included in the Singular software library grobcov.lib implemented by the author and H. Schönemann, the book serving also as User Manual for the library.Optimal online and offline algorithms for robot-assisted restoration of barrier coverage.https://www.zbmath.org/1453.681932021-02-27T13:50:00+00:00"Czyzowicz, J."https://www.zbmath.org/authors/?q=ai:czyzowicz.jurek"Kranakis, E."https://www.zbmath.org/authors/?q=ai:kranakis.evangelos"Krizanc, D."https://www.zbmath.org/authors/?q=ai:krizanc.danny"Narayanan, L."https://www.zbmath.org/authors/?q=ai:narayanan.lata"Opatrny, J."https://www.zbmath.org/authors/?q=ai:opatrny.jarda|opatrny.jaroslavSummary: Cooperation between mobile robots and wireless sensor networks is a line of research that is currently attracting a lot of attention. In this context, we study the following problem of barrier coverage by stationary wireless sensors that are assisted by a mobile robot with the capacity to move sensors. Assume that \(n\) sensors are initially arbitrarily distributed on a line segment barrier. Each sensor is said to cover the portion of the barrier that intersects with its sensing area. Owing to incorrect initial position, or the death of some of the sensors, the barrier is not completely covered by the sensors. We employ a mobile robot to move the sensors to final positions on the barrier such that barrier coverage is guaranteed. We seek algorithms that minimize the length of the robot's trajectory, since this allows the restoration of barrier coverage as soon as possible. We give an optimal linear-time offline algorithm that gives a minimum-length trajectory for a robot that starts at one end of the barrier and achieves the restoration of barrier coverage. We also study two different online models: one in which the online robot does not know the length of the barrier in advance, and the other in which the online robot knows it in advance. 
For the case when the online robot does not know the length of the barrier, we prove a tight bound of \(3/2\) on the competitive ratio, and we give a tight lower bound of \(5/4\) on the competitive ratio in the other case. Thus for each case we give an optimal online algorithm.
For the entire collection see [Zbl 1317.68008].Opposition-based quantum firework algorithm for continuous optimisation problems.https://www.zbmath.org/1453.681712021-02-27T13:50:00+00:00"Gao, Hongyuan"https://www.zbmath.org/authors/?q=ai:gao.hongyuan"Li, Chenwan"https://www.zbmath.org/authors/?q=ai:li.chenwanSummary: A novel intelligence algorithm for continuous optimisation problem is proposed in this paper, termed as opposition-based quantum firework algorithm (OQFA). The proposed OQFA combines fireworks algorithm (FA) and two improved operators: opposition-based learning and quantum computing theory. The opposition-based learning operator can accelerate the convergence rate of algorithm by retaining the better solution, and the quantum computing theory can ameliorate the capability of searching and enhance the exploration efficiency of the solution space. Since OQFA has the features of both opposition-based learning and quantum computing, it has a high possibility to find a global optimum and avoids premature convergence. Experimental results on five test functions show that OQFA outperforms cultural algorithm (CA), particle swarm optimisation (PSO) and FA in terms of convergence rate and convergence accuracy.Inductive logic programming. 28th international conference, ILP 2018, Ferrara, Italy, September 2--4, 2018. Proceedings.https://www.zbmath.org/1453.680192021-02-27T13:50:00+00:00"Riguzzi, Fabrizio (ed.)"https://www.zbmath.org/authors/?q=ai:riguzzi.fabrizio"Bellodi, Elena (ed.)"https://www.zbmath.org/authors/?q=ai:bellodi.elena"Zese, Riccardo (ed.)"https://www.zbmath.org/authors/?q=ai:zese.riccardoThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1453.68016].
Indexed articles:
\textit{Cropper, Andrew; Tourret, Sophie}, Derivation reduction of metarules in meta-interpretive learning, 1-21 [Zbl 07307743]
\textit{Legras, Swann; Rouveirol, Céline; Ventos, Véronique}, The game of Bridge: a challenge for ILP, 72-87 [Zbl 07307747]
\textit{Nickles, Matthias}, Sampling-based SAT/ASP multi-model optimization as a framework for probabilistic inference, 88-104 [Zbl 07307748]
\textit{Ribeiro, Tony; Folschette, Maxime; Magnin, Morgan; Roux, Olivier; Inoue, Katsumi}, Learning dynamics with synchronous, asynchronous and general semantics, 118-140 [Zbl 07307750]
\textit{Siebers, Michael; Schmid, Ute}, Was the year 2000 a leap year? Step-wise narrowing theories with Metagol, 141-156 [Zbl 07307751]
\textit{Škrlj, Blaž; Kralj, Jan; Lavrač, Nada}, Targeted end-to-end knowledge graph decomposition, 157-171 [Zbl 07307752]Recent advances in domain decomposition methods for total variation minimization.https://www.zbmath.org/1453.654292021-02-27T13:50:00+00:00"Lee, Chang-Ock"https://www.zbmath.org/authors/?q=ai:lee.chang-ock"Park, Jongho"https://www.zbmath.org/authors/?q=ai:park.jonghoSummary: Total variation minimization is standard in mathematical imaging and there have been numerous researches over the last decades. In order to process large-scale images in real-time, it is essential to design parallel algorithms that utilize distributed memory computers efficiently. The aim of this paper is to illustrate recent advances of domain decomposition methods for total variation minimization as parallel algorithms. Domain decomposition methods are suitable for parallel computation since they solve a large-scale problem by dividing it into smaller problems and treating them in parallel, and they already have been widely used in structural mechanics. Differently from problems arising in structural mechanics, energy functionals of total variation minimization problems are in general nonlinear, nonsmooth, and nonseparable. Hence, designing efficient domain decomposition methods for total variation minimization is a quite challenging issue. We describe various existing approaches on domain decomposition methods for total variation minimization in a unified view. We address how the direction of research on the subject has changed over the past few years, and suggest several interesting topics for further research.Computer aided verification. 32nd international conference, CAV 2020, Los Angeles, CA, USA, July 21--24, 2020. Proceedings. 
Part II.https://www.zbmath.org/1453.680182021-02-27T13:50:00+00:00"Lahiri, Shuvendu Kumar (ed.)"https://www.zbmath.org/authors/?q=ai:lahiri.shuvendu-kumar"Wang, Chao (ed.)"https://www.zbmath.org/authors/?q=ai:wang.chao.3|wang.chao.1|wang.chao|wang.chao.2The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1416.68007; Zbl 1416.68008]. For Part I of the proceedings of the present conference see [Zbl 1453.68017].Computer aided verification. 32nd international conference, CAV 2020, Los Angeles, CA, USA, July 21--24, 2020. Proceedings. Part I.https://www.zbmath.org/1453.680172021-02-27T13:50:00+00:00"Lahiri, Shuvendu Kumar (ed.)"https://www.zbmath.org/authors/?q=ai:lahiri.shuvendu-kumar"Wang, Chao (ed.)"https://www.zbmath.org/authors/?q=ai:wang.chao.3|wang.chao.2|wang.chao.1|wang.chaoThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1416.68007; Zbl 1416.68008]. For Part II of the proceedings of the present conference see [Zbl 1453.68018].
Indexed articles:
\textit{Elboher, Yizhak Yisrael; Gottschlich, Justin; Katz, Guy}, An abstraction-based framework for neural network verification, 43-65 [Zbl 07306404]
\textit{Bak, Stanley; Tran, Hoang-Dung; Hobbs, Kerianne; Johnson, Taylor T.}, Improved geometric path enumeration for verifying ReLU neural networks, 66-96 [Zbl 07306405]
\textit{Xu, Dong; Shriver, David; Dwyer, Matthew B.; Elbaum, Sebastian}, Systematic generation of diverse benchmarks for DNN verification, 97-121 [Zbl 07306406]
\textit{Albert, Elvira; Gordillo, Pablo; Rubio, Albert; Schett, Maria A.}, Synthesis of super-optimized smart contracts using Max-SMT, 177-200 [Zbl 07306411]
\textit{Sahai, Shubham; Subramanyan, Pramod; Sinha, Rohit}, Verification of quantitative hyperproperties using trace enumeration relations, 201-224 [Zbl 07306412]
\textit{Nemati, Hamed; Buiras, Pablo; Lindner, Andreas; Guanciale, Roberto; Jacobs, Swen}, Validation of abstract side-channel models for computer architectures, 225-248 [Zbl 07306413]
\textit{Nagar, Kartik; Mukherjee, Prasita; Jagannathan, Suresh}, Semantics, specification, and bounded verification of concurrent libraries in replicated systems, 251-274 [Zbl 07306414]
\textit{Kragl, Bernhard; Qadeer, Shaz; Henzinger, Thomas A.}, Refinement for structured concurrent programs, 275-298 [Zbl 07306415]
\textit{Jaber, Nouraldin; Jacobs, Swen; Wagner, Christopher; Kulkarni, Milind; Samanta, Roopsha}, Parameterized verification of systems with global synchronization and guards, 299-323 [Zbl 07306416]
\textit{Li, Xiao; Houshmand, Farzin; Lesani, Mohsen}, \textsc{Hampa}: solver-aided recency-aware replication, 324-349 [Zbl 07306417]
\textit{Çirisci, Berk; Enea, Constantin; Farzan, Azadeh; Mutluergil, Suha Orhun}, Root causing linearizability violations, 350-375 [Zbl 07306418]
\textit{Schemmel, Daniel; Büning, Julian; Rodríguez, César; Laprell, David; Wehrle, Klaus}, Symbolic partial-order execution for testing multi-threaded programs, 376-400 [Zbl 07306419]
\textit{Gan, Ting; Xia, Bican; Xue, Bai; Zhan, Naijun; Dai, Liyun}, Nonlinear Craig interpolant generation, 415-438 [Zbl 07306421]
\textit{Bendík, Jaroslav; Meel, Kuldeep S.}, Approximate counting of minimal unsatisfiable subsets, 439-462 [Zbl 07306422]
\textit{Soos, Mate; Gocht, Stephan; Meel, Kuldeep S.}, Tinted, detached, and lazy CNF-XOR solving and its applications to counting and sampling, 463-484 [Zbl 07306423]
\textit{Temel, Mertcan; Slobodova, Anna; Hunt, Warren A. Jr.}, Automated and scalable verification of integer multipliers, 485-507 [Zbl 07306424]
\textit{Slivovsky, Friedrich}, Interpolation-based semantic gate extraction and its applications to QBF preprocessing, 508-528 [Zbl 07306425]
\textit{Kölbl, Martin; Leue, Stefan; Wies, Thomas}, TarTar: a timed automata repair tool, 529-540 [Zbl 07306426]
\textit{Devonport, Alex; Khaled, Mahmoud; Arcak, Murat; Zamani, Majid}, PIRK: scalable interval reachability analysis for high-dimensional nonlinear systems, 556-568 [Zbl 07306428]
\textit{Beneš, Nikola; Brim, Luboš; Kadlecaj, Jakub; Pastva, Samuel; Šafránek, David}, AEON: attractor bifurcation analysis of parametrised Boolean networks, 569-581 [Zbl 07306429]
\textit{Chen, Xin; Peng, Chao; Lin, Wang; Yang, Zhengfeng; Zhang, Yifang; Li, Xuandong}, A novel approach for solving the BMI problem in barrier certificates generation, 582-603 [Zbl 07306430]
\textit{Sankaranarayanan, Sriram}, Reachability analysis using message passing over tree decompositions, 604-628 [Zbl 07306431]
\textit{Fan, Chuchu; Miller, Kristina; Mitra, Sayan}, Fast and guaranteed safe controller synthesis for nonlinear vehicle models, 629-652 [Zbl 07306432]
\textit{Češka, Milan; Chau, Calvin; Křetínský, Jan}, SeQuaiA: a scalable tool for semi-quantitative analysis of chemical reaction networks, 653-666 [Zbl 07306433]On the complexity of all \(( g , f )\)-factors problem.https://www.zbmath.org/1453.051032021-02-27T13:50:00+00:00"Lu, Hongliang"https://www.zbmath.org/authors/?q=ai:lu.hongliang"Wang, Wei"https://www.zbmath.org/authors/?q=ai:wang.wei.5"Jiang, Yaolin"https://www.zbmath.org/authors/?q=ai:jiang.yaolinSummary: Let \(G\) be a graph with vertex set \(V\) and let \(g , f : V \to \mathbb{Z}^+\) be two functions such that \(g \leq f\). We say that \(G\) has all \(( g , f )\)-factors if \(G\) has an \(h\)-factor for every \(h : V \to \mathbb{Z}^+\) such that \(g ( v ) \leq h ( v ) \leq f ( v )\) for every \(v \in V\) and \(\sum_{v \in V} h ( v ) \equiv 0 \pmod 2\). Two decades ago, \textit{Th. Niessen} [J. Comb. Theory, Ser. B 72, No. 1, 152--156 (1998; Zbl 0888.05048)] derived from Tutte's \(f\)-factor theorem a similar characterization for the property of graphs having all \(( g , f )\)-factors and asked whether there is a polynomial time algorithm for testing whether a graph \(G\) has all \(( g , f )\)-factors. In this paper, we show that it is NP-hard to determine whether a graph \(G\) has all \(( g , f )\)-factors, which gives a negative answer to the question of Niessen.A linear algebra method to decompose forms whose length is lower than the number of variables into weighted sum of squares.https://www.zbmath.org/1453.930432021-02-27T13:50:00+00:00"Menini, Laura"https://www.zbmath.org/authors/?q=ai:menini.laura"Possieri, Corrado"https://www.zbmath.org/authors/?q=ai:possieri.corrado"Tornambè, Antonio"https://www.zbmath.org/authors/?q=ai:tornambe.antonioSummary: In this paper, an algorithm based on linear algebra tools is proposed to compute a weighted sum of squares decomposition of a given form whose length is lower than the number of variables. 
Such an objective is pursued by using linear algebra techniques to perform tasks that are usually carried out through computational algebraic geometry tools. Several examples are reported to show that the use of linear algebra rather than algebraic geometry reduces execution times without affecting the effectiveness of the algorithm. Applications of the given procedure to system analysis and to control design problems are reported, along with a detailed complexity analysis.Finite characterization of the coarsest balanced coloring of a network.https://www.zbmath.org/1453.050392021-02-27T13:50:00+00:00"Stewart, Ian"https://www.zbmath.org/authors/?q=ai:stewart.ian-nPrefix and suffix reversals on strings.https://www.zbmath.org/1453.682272021-02-27T13:50:00+00:00"Fertin, Guillaume"https://www.zbmath.org/authors/?q=ai:fertin.guillaume"Jankowiak, Loïc"https://www.zbmath.org/authors/?q=ai:jankowiak.loic"Jean, Géraldine"https://www.zbmath.org/authors/?q=ai:jean.geraldineSummary: The \textsc{Sorting by Prefix Reversals} problem consists of sorting the elements of a given permutation \(\pi \) with a minimum number of prefix reversals, i.e. reversals that always involve the leftmost element of \(\pi \). A natural extension of this problem is to consider strings (in which any letter may appear several times) rather than permutations. In strings, three different types of problems arise: \textit{grouping} (starting from a string \(S\), transform it so that all identical letters are consecutive), \textit{sorting} (a constrained version of grouping, in which the target string must be lexicographically ordered) and \textit{rearranging} (given two strings \(S\) and \(T\), transform \(S\) into \(T\)). 
In this paper, we study these three problems from an algorithmic viewpoint, in the setting where two operations (rather than one) are allowed: namely, \textit{prefix and suffix} reversals -- where a suffix reversal must always involve the rightmost element of the string. We first give elements of comparison between the ``prefix reversals only'' case and our case. The algorithmic results we obtain on these three problems depend on the size \(k\) of the alphabet on which the strings are built. In particular, we show that the grouping problem is in P for \(k\in [2;4]\) and when \(n-k=O(1)\), where \(n\) is the length of the string. We also show that the grouping problem admits a PTAS for any constant \(k\), and is 2-approximable for any \(k\). Concerning sorting, it is in P for \(k\in [2;3]\), admits a PTAS for constant \(k\), and is NP-hard for \(k=n\). Finally, concerning the rearranging problem, we show that it is NP-hard, both for \(k=O(1)\) and \(k=n\). We also show that the three problems are FPT when the parameter is the maximum number of blocks over the source and target strings.
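The two operations and the grouping target described above can be made concrete with a few lines of Python (a minimal editorial sketch; the function names are ours, not the authors'):

```python
def prefix_reversal(s, i):
    """Reverse the prefix of length i (always involves the leftmost symbol)."""
    return s[:i][::-1] + s[i:]

def suffix_reversal(s, i):
    """Reverse the suffix of length i (always involves the rightmost symbol)."""
    return s[:-i] + s[-i:][::-1]

def is_grouped(s):
    """True if all identical letters of s are consecutive (the 'grouping' target)."""
    seen = set()
    for a, b in zip(s, s[1:]):
        if a != b:
            if b in seen:      # letter b reappears after its block ended
                return False
            seen.add(a)
    return True

# prefix_reversal("abcde", 3) == "cbade";  suffix_reversal("abcde", 2) == "abced"
# is_grouped("aabbcc") is True;  is_grouped("abab") is False
```

Sorting is then the special case of grouping where the final string must also be lexicographically ordered.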
For the entire collection see [Zbl 1320.68019].On polyatomic tomography over abelian groups: some remarks on consistency, tree packings and complexity.https://www.zbmath.org/1453.681142021-02-27T13:50:00+00:00"Gritzmann, Peter"https://www.zbmath.org/authors/?q=ai:gritzmann.peter"Langfeld, Barbara"https://www.zbmath.org/authors/?q=ai:langfeld.barbaraSummary: The paper deals with an inverse problem of reconstructing matrices from their marginal sums. More precisely, we are interested in the existence of \(r \times s\) matrices for which only the following information is available: The entries belong to known subsets of \(c\) distinguishable abelian groups, and the row and column sums of all entries from each group are given. This generalizes Ryser's classical problem of characterizing the set of all 0-1-matrices with given row and column sums and is a basic problem in (polyatomic) discrete tomography. We show that the problem is closely related to packings of trees in bipartite graphs, prove consistency results, give algorithms and determine its complexity. In particular, we find a somewhat unusual complexity behavior: the problem is hard for ``small'' but easy for ``large'' matrices.On graphs with 2 trivial distance ideals.https://www.zbmath.org/1453.050432021-02-27T13:50:00+00:00"Alfaro, Carlos A."https://www.zbmath.org/authors/?q=ai:alfaro.carlos-aDistance ideals generalize the Smith normal form of the distance matrix of a graph. The family of graphs with 2 trivial distance ideals contains the family of graphs whose distance matrix has at most 2 invariant factors equal to 1. Here, the author gives an infinite family of forbidden induced subgraphs for the graphs with 2 trivial distance ideals. These graphs are also related to other well-known graph classes.
Let \(G=(V,E)\) be a connected graph and \(X_G=\{x_u : u \in V(G)\}\) a set of indeterminates. The distance \(d_G(u,v)\) between the vertices \(u\) and \(v\) is the number of edges of a shortest path between them. Let \(\operatorname{diag}(X_G)\) denote the diagonal matrix with the indeterminates on the diagonal. The distance matrix \(D(G)\) of \(G\) is the matrix with rows and columns indexed by the vertices of \(G\) where the \(uv\)-entry is equal to \(d_G(u,v)\). The generalized distance matrix \(D(G,X_G)\) of \(G\) is then the matrix with rows and columns indexed by the vertices of \(G\) defined as \(\operatorname{diag}(X_G)+D(G)\). Note that one can recover the distance matrix from the generalized distance matrix by evaluating \(X_G\) at the zero vector, that is, \(D(G) = D(G,0)\).
Let \(R[X_G]\) be the polynomial ring over a commutative ring \(R\) in the variables \(X_G\). For all \(i \in [n] := \{1,2,\dots,n\}\), the \(i\)-th distance ideal \(I^{R}_{i}(G,X_G)\) of \(G\) is the ideal over \(R[X_G]\) given by \(\langle\mathrm{minors}_i(D(G,X_G))\rangle\), where \(n\) is the number of vertices of \(G\) and \(\mathrm{minors}_i(D(G,X_G))\) is the set of the determinants of the \(i \times i\) submatrices of \(D(G,X_G)\).
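The distance matrix \(D(G)\) defined above can be computed by a breadth-first search from each vertex; a minimal sketch (the adjacency-list input format is our own choice for illustration):

```python
from collections import deque

def distance_matrix(adj):
    """Distance matrix D(G) of a connected graph given as an adjacency list
    {vertex: iterable of neighbours}; the (u, v) entry is d_G(u, v)."""
    verts = sorted(adj)
    D = []
    for u in verts:
        dist = {u: 0}
        queue = deque([u])
        while queue:                      # breadth-first search from u
            w = queue.popleft()
            for x in adj[w]:
                if x not in dist:
                    dist[x] = dist[w] + 1
                    queue.append(x)
        D.append([dist[v] for v in verts])
    return D

# Path P3 on vertices 0 - 1 - 2:
# distance_matrix({0: [1], 1: [0, 2], 2: [1]}) == [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
```

The generalized distance matrix \(D(G,X_G)\) is obtained from this output by adding the indeterminate \(x_u\) to the \(u\)-th diagonal entry.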
Distance ideals were defined by \textit{C. A. Alfaro} and \textit{L. Taylor} [Linear Algebra Appl. 584, 127--144 (2020; Zbl 1426.05060)] as a generalization of the Smith normal form of the distance matrix and the distance spectra of graphs. In this paper, the discussion is carried out in the case when \(R\) is the ring \(\mathbb{Z}\) of integers.
Smith normal forms have been useful in understanding algebraic properties of combinatorial objects. For instance, computing the Smith normal form of the adjacency or Laplacian matrix is a standard technique used to determine the Smith group and the critical group of a graph.
Smith normal forms can be computed using row and column operations. In fact, \textit{R. Kannan} and \textit{A. Bachem} [SIAM J. Comput. 8, 499--507 (1979; Zbl 0446.65015)] found polynomial algorithms for computing the Smith normal form of an integer matrix. An alternative way of obtaining the Smith normal form is as follows. Let \(\Delta_i(G)\) denote the greatest common divisor of the \(i\)-minors of the distance matrix \(D(G)\); then its \(i\)-th invariant factor \(f_i\) is equal to \(\Delta_i(G)/\Delta_{i-1}(G)\), where \(\Delta_0(G) = 1\). Thus the Smith normal form of \(D(G)\) is equal to \(\operatorname{diag}(f_1,f_2,\dots,f_r,0,\dots,0)\). It is known that the Smith normal form may not exist over the ring \(\mathbb{Z}[x]\); consider, for example, the following matrix with entries in \(\mathbb{Z}[x]\)
\[
\begin{bmatrix}
2 & 0 \\
0 & x
\end{bmatrix}.
\]
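The gcd-of-minors route to the invariant factors described above can be sketched in a few lines of Python (a brute-force editorial illustration for small matrices, not the polynomial-time Kannan-Bachem algorithm; function names are ours):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(M):
    """Invariant factors f_i = Delta_i / Delta_{i-1}, where Delta_i is the gcd
    of all i x i minors (Delta_0 = 1); zeros are appended once Delta_i vanishes."""
    n = len(M)
    prev, factors = 1, []
    for i in range(1, n + 1):
        d = 0
        for rows in combinations(range(n), i):
            for cols in combinations(range(n), i):
                minor = det([[M[r][c] for c in cols] for r in rows])
                d = gcd(d, minor)          # math.gcd ignores signs
        if d == 0:
            factors.extend([0] * (n - i + 1))
            break
        factors.append(d // prev)
        prev = d
    return factors

# Distance matrix of the path P3 (a tree): exactly two invariant factors
# equal to 1, as stated below for trees.
# invariant_factors([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) == [1, 1, 4]
```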
Little is known about the Smith normal forms of distance matrices. In the paper by \textit{Y. Hou} and \textit{C. Woo} [Linear Multilinear Algebra 56, No. 6, 611--626 (2008; Zbl 1149.05033)], the Smith normal forms of the distance matrices were determined for trees, wheels, cycles, and complements of cycles, and partially determined for complete multipartite graphs. In the paper by \textit{R. B. Bapat} and \textit{M. Karimi} [ibid. 65, No. 6, 1117--1130 (2017; Zbl 1360.05094)], the Smith normal forms of the distance matrices of unicyclic graphs and of the wheel graph with trees attached to each vertex were obtained.
An ideal is said to be unit, or trivial, if it is equal to \(\left\langle 1\right\rangle\). Let \(\Phi(G)\) denote the maximum integer \(i\) for which \(I^{\mathbb{Z}}_{i}(G,X_G)\) is trivial. Let \(\Lambda_k\) denote the family of graphs with at most \(k\) trivial distance ideals over \(\mathbb{Z}\). Note that every graph with at least one non-loop edge has at least one trivial distance ideal. On the other hand, let \(\phi(G)\) denote the number of invariant factors of the distance matrix of \(G\) equal to 1. In this case, every graph with at least one non-loop edge has at least two invariant factors equal to one.
This paper explores the properties of the family \(\Lambda_2\) of graphs with at most two trivial distance ideals over \(\mathbb{Z}\). In particular, the author finds an infinite number of graphs that are forbidden for \(\Lambda_2\). A set \(F\) of 17 graphs with certain properties is defined. In Section 2, the author proves that graphs in \(\Lambda_2\) are \(\left\{F,\text{odd-holes}\right\}\)-free graphs, where odd-holes are cycles of odd length greater than or equal to 7.
One of the main applications of a characterization of \(\Lambda_2\) is that it provides an approach to characterizing the graphs with \(\phi(G)=2\), since they are contained in \(\Lambda_2\). It has been proved that the distance matrix of a tree has exactly 2 invariant factors equal to 1. Therefore,
\[ \text{trees} \subseteq \Lambda_2\subseteq \{F, \text{odd-holes}\}-\text{free graphs}. \]
Among the forbidden graphs for \(\Lambda_2\) there are several graphs studied in other contexts, like the bull and odd-holes. Another related family is that of 3-leaf powers, which was characterized as the \(\left\{\text{bull, dart, gem}\right\}\)-free chordal graphs.
Distance-hereditary graphs are another related family, defined by \textit{E. Howorka} [Q. J. Math., Oxf. II. Ser. 28, 417--420 (1977; Zbl 0376.05040)]. A graph \(G\) is distance-hereditary if for each connected induced subgraph \(H\) of \(G\) and every pair \(u\) and \(v\) of vertices in \(H\), \(d_H(u,v) = d_G(u,v)\). Distance-hereditary graphs are \(\left\{\text{house, gem, domino, holes}\right\}\)-free graphs, where holes are cycles of length greater than or equal to 5; this family intersects \(\Lambda_2\). Also, if \(H\) is a connected induced subgraph of a distance-hereditary graph \(G\), then \(I^{R}_{i}(H, X_H) \subseteq I^{R}_{i}(G,X_G)\) for all \(i \leq \left|V(H)\right|\).
Previously, an analogous notion to the distance ideals, but for the adjacency and Laplacian matrices, was explored: the critical ideals. New connections were found in contexts different from the Smith group or sandpile group, like the zero-forcing number and the minimum rank of a graph. In this setting, the set of forbidden graphs for the family with at most \(k\) trivial critical ideals is conjectured to be finite. It is interesting that for distance ideals this is not true.
A graph \(G\) is forbidden for \(\Lambda_k\) if the \((k+1)\)-th distance ideal of \(G\) is trivial. In addition, a forbidden graph \(H\) for \(\Lambda_k\) is minimal if \(H\) does not contain a connected forbidden graph for \(\Lambda_k\) as an induced subgraph, and any graph \(G\) containing \(H\) as an induced subgraph is forbidden for \(\Lambda_k\). The set of minimal forbidden graphs for \(\Lambda_k\) is denoted by \(\mathrm{Forb}_k\).
The author proves several results. The paper is well written, and researchers will benefit from reading it.
Reviewer: A. Lourdusamy (Palayamkottai)Training a classifier by descriptors in the space of the Radon transform.https://www.zbmath.org/1453.681692021-02-27T13:50:00+00:00"Samsonov, N. A."https://www.zbmath.org/authors/?q=ai:samsonov.n-a"Gneushev, A. N."https://www.zbmath.org/authors/?q=ai:gneushev.a-n"Matveev, I. A."https://www.zbmath.org/authors/?q=ai:matveev.ivan-aSummary: The problem of detecting objects in images is solved by training a classifier on descriptors constructed from the local Radon transform of the image gradient field. The space of the Radon transform is considered as the Hough space accumulator in which projections are constructed. The set of local projections forms a descriptor of the region related to the object, which is a generalization of the well-known histogram of oriented gradients (HOG) descriptor. The effects of the approximation of the Radon transform contribution function, the form of local normalization, and the number of directions in the projection histograms on pedestrian detection results are investigated. The results produced by the proposed descriptor are compared with the results obtained using the HOG descriptor and convolutional neural networks (CNN) based on the ResNext and MobileNet architectures on the INRIA and CityScapes databases.Self-adapting control parameters with multi-parent crossover in differential evolution algorithm.https://www.zbmath.org/1453.901932021-02-27T13:50:00+00:00"Fan, Yuanyuan"https://www.zbmath.org/authors/?q=ai:fan.yuanyuan"Liang, Qingzhong"https://www.zbmath.org/authors/?q=ai:liang.qingzhong"Liu, Chao"https://www.zbmath.org/authors/?q=ai:liu.chao"Yan, Xuesong"https://www.zbmath.org/authors/?q=ai:yan.xuesongSummary: The performance of the differential evolution (DE) algorithm is influenced by the setting of its control parameters, which are highly problem-dependent and difficult to determine. 
Therefore, studies of parameter adaptation mechanisms have gradually become more popular. In this paper, we present a self-adaptive DE algorithm (GaDE), in which the adaptation of the amplification factor and crossover rate is performed with a multi-parent crossover, while the adaptation timing is decided by comparing the target vector with its offspring. The performance of the GaDE algorithm is evaluated on a suite of bound-constrained numerical optimisation problems. The results show that our algorithm is better than, or at least comparable to, the canonical DE and two other adaptive DE algorithms.PCPs and the hardness of generating synthetic data.https://www.zbmath.org/1453.941212021-02-27T13:50:00+00:00"Ullman, Jonathan"https://www.zbmath.org/authors/?q=ai:ullman.jonathan-r"Vadhan, Salil"https://www.zbmath.org/authors/?q=ai:vadhan.salil-pSummary: Assuming the existence of one-way functions, we show that there is no polynomial-time differentially private algorithm \(\mathcal{A}\) that takes a database \(D\in (\{0,1\}^d)^n\) and outputs a ``synthetic database'' \(\hat{D}\) all of whose two-way marginals are approximately equal to those of \(D\). (A two-way marginal is the fraction of database rows \(x\in \{0,1\}^d\) with a given pair of values in a given pair of columns.) This answers a question of \textit{B. Barak} et al. [Privacy, accuracy, and consistency too: A holistic solution to contingency table release. in: Proceedings of the 26th Symposium on Principles of Database Systems, PODS 2007, 273--282 (2007)], who gave an algorithm running in time \(\operatorname{poly}(n,2^d)\). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by \textit{C. Dwork} et al. [``When and how can privacy-preserving data release be done efficiently?'' In: Proceedings of the 2009 International ACM Symposium on Theory of Computing (STOC 2009)]) with encodings based on the PCP theorem. 
We also present both negative and positive results for generating ``relaxed'' synthetic data, where the fraction of rows in \(D\) satisfying a predicate \(c\) are estimated by applying \(c\) to each row of \(\hat{D}\) and aggregating the results in some way.A formal security analysis of the Signal messaging protocol.https://www.zbmath.org/1453.941332021-02-27T13:50:00+00:00"Cohn-Gordon, Katriel"https://www.zbmath.org/authors/?q=ai:cohn-gordon.katriel"Cremers, Cas"https://www.zbmath.org/authors/?q=ai:cremers.cas-j-f"Dowling, Benjamin"https://www.zbmath.org/authors/?q=ai:dowling.benjamin"Garratt, Luke"https://www.zbmath.org/authors/?q=ai:garratt.luke"Stebila, Douglas"https://www.zbmath.org/authors/?q=ai:stebila.douglasSummary: The Signal protocol is a cryptographic messaging protocol that provides end-to-end encryption for instant messaging in WhatsApp, Wire, and Facebook Messenger among many others, serving well over 1 billion active users. Signal includes several uncommon security properties (such as ``future secrecy'' or ``post-compromise security''), enabled by a technique called \textit{ratcheting} in which session keys are updated with every message sent. We conduct a formal security analysis of Signal's initial extended triple Diffie-Hellman (X3DH) key agreement and Double Ratchet protocols as a multi-stage authenticated key exchange protocol. We extract from the implementation a formal description of the abstract protocol and define a security model which can capture the ``ratcheting'' key update structure as a multi-stage model where there can be a ``tree'' of stages, rather than just a sequence. We then prove the security of Signal's key exchange core in our model, demonstrating several standard security properties. 
We have found no major flaws in the design and hope that our presentation and results can serve as a foundation for other analyses of this widely adopted protocol.A categorical reduction system for linear logic.https://www.zbmath.org/1453.030082021-02-27T13:50:00+00:00"Hasegawa, Ryu"https://www.zbmath.org/authors/?q=ai:hasegawa.ryuComputational aspects of the lambda calculus suggest that the corresponding Cartesian closed category may well be given a dynamic computational mechanism. The idea in [\textit{R. Seely}, ``Modelling computations: a \(2\)-categorical framework'', in: Proceedings of the symposium on logic in computer science, LICS'87. Los Alamitos: IEEE Computer Society. 65--71 (1987)] looks natural, but has not been pursued further.
This paper introduces a rewriting system on the categorical semantics of linear logic, namely, a free (intuitionistic or classical) linear category presented as a calculus. It is shown that this calculus satisfies the weak termination property. The author intends to show in a forthcoming paper that it is essentially confluent. These two results together imply that each morphism has a unique normal form as long as no units are involved. A by-product is the mechanization of diagram chasing.
Reviewer: Hirokazu Nishimura (Tsukuba)Guest editors' foreword.https://www.zbmath.org/1453.680102021-02-27T13:50:00+00:00"Barequet, Gill (ed.)"https://www.zbmath.org/authors/?q=ai:barequet.gill"Wang, Yusu (ed.)"https://www.zbmath.org/authors/?q=ai:wang.yusuFrom the text: This special issue of Discrete \& Computational Geometry contains a selection of the best papers that were presented at the 35th Annual Symposium on Computational Geometry, which was held in Portland (Oregon), USA, on June 18--21, 2019. The seven papers in this special issue were invited, submitted, reviewed, and revised according to the usual high standards of the journal. It is our pleasure to briefly introduce these contributions.Probabilistic graphical models. Principles and applications. 2nd fully updated edition.https://www.zbmath.org/1453.680012021-02-27T13:50:00+00:00"Sucar, Luis Enrique"https://www.zbmath.org/authors/?q=ai:sucar.luis-enriquePublisher's description: This fully updated new edition of a uniquely accessible textbook/reference provides a general introduction to probabilistic graphical models (PGMs) from an engineering perspective. It features new material on partially observable Markov decision processes, causal graphical models, causal discovery and deep learning, as well as an even greater number of exercises; it also incorporates a software library for several graphical models in Python.
The book covers the fundamentals for each of the main classes of PGMs, including representation, inference and learning principles, and reviews real-world applications for each type of model. These applications are drawn from a broad range of disciplines, highlighting the many uses of Bayesian classifiers, hidden Markov models, Bayesian networks, dynamic and temporal Bayesian networks, Markov random fields, influence diagrams, and Markov decision processes.
Topics and features:
\begin{itemize}
\item
Presents a unified framework encompassing all of the main classes of PGMs
\item Explores the fundamental aspects of representation, inference and learning for each technique
\item Examines new material on partially observable Markov decision processes and causal graphical models
\item Includes a new chapter introducing deep neural networks and their relation with probabilistic graphical models
\item Covers multidimensional Bayesian classifiers, relational graphical models, and causal models
\item Provides substantial chapter-ending exercises, suggestions for further reading, and ideas for research or programming projects
\item Describes classifiers such as Gaussian Naive Bayes, Circular Chain Classifiers, and Hierarchical Classifiers with Bayesian Networks
\item Outlines the practical application of the different techniques
\item Suggests possible course outlines for instructors
\end{itemize}
This classroom-tested work is suitable as a textbook for an advanced undergraduate or a graduate course in probabilistic graphical models for students of computer science, engineering, and physics. Professionals wishing to apply probabilistic graphical models in their own field, or interested in the basis of these techniques, will also find the book to be an invaluable reference.
For a review of the first edition see [Zbl 1325.68007].Every nonnegative real number is an abelian critical exponent.https://www.zbmath.org/1453.681462021-02-27T13:50:00+00:00"Peltomäki, Jarkko"https://www.zbmath.org/authors/?q=ai:peltomaki.jarkko"Whiteland, Markus A."https://www.zbmath.org/authors/?q=ai:whiteland.markus-aTwo finite words are abelian equivalent if they contain the same number of occurrences of each letter. An abelian power of exponent \(e\) and period \(m\) is a concatenation of \(e\) abelian equivalent \(m\)-letter words. The abelian critical exponent of an infinite word \(w\) is the \(\limsup\) as \(m \to \infty\) of the highest exponent of a period-\(m\) abelian power occurring as a factor of \(w\), divided by \(m\). It was proved by \textit{G. Fici} et al. [Theor. Comput. Sci. 635, 16--34 (2016; Zbl 1346.68150)] that the set of abelian critical exponents of Sturmian words is equal to the Lagrange spectrum. Here, the authors show that the remaining nonnegative real numbers are abelian critical exponents of words on a three-letter alphabet. Similar results hold for \(k\)-abelian equivalence, which means that two words contain the same number of occurrences of all words of length at most \(k\).
It is an open question whether all real numbers are (\(k\)-)abelian critical exponents of binary words. This was proved by the authors for \(k=\infty\) in [Acta Arith. 194, No. 2, 135--154 (2020; Zbl 07221811)]; note that \(\infty\)-abelian equivalence is the usual equality of words but the \(\infty\)-abelian critical exponent defined as above differs from the usual critical exponent.
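The notion of an abelian power recalled above can be checked mechanically; a minimal editorial sketch in Python (the function name is ours):

```python
from collections import Counter

def is_abelian_power(w, e, m):
    """True if w is a concatenation of e abelian-equivalent blocks of length m,
    i.e. an abelian power of exponent e and period m."""
    if len(w) != e * m:
        return False
    blocks = [w[i * m:(i + 1) * m] for i in range(e)]
    first = Counter(blocks[0])            # letter counts of the first block
    return all(Counter(b) == first for b in blocks[1:])

# "abba" = "ab" + "ba": the two blocks are abelian equivalent, so "abba" is an
# abelian power of exponent 2 and period 2 (an abelian square), while "abca" is not.
```

The abelian critical exponent of an infinite word is then the \(\limsup\) over \(m\) of the largest such \(e\) for a period-\(m\) factor, divided by \(m\).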
For the entire collection see [Zbl 1419.68014].
Reviewer: Wolfgang Steiner (Paris)Edge-preserving integration of a normal field: weighted least-squares, TV and \(L^1\) approaches.https://www.zbmath.org/1453.940162021-02-27T13:50:00+00:00"Quéau, Yvain"https://www.zbmath.org/authors/?q=ai:queau.yvain"Durou, Jean-Denis"https://www.zbmath.org/authors/?q=ai:durou.jean-denisSummary: We introduce several new functionals, inspired by variational image denoising models, for recovering a piecewise-smooth surface from a dense estimation of its normal field. In the weighted least-squares approach, the non-differentiable elements of the surface are a priori detected so as to weight the least-squares model. To avoid this detection step, we introduce reweighted least-squares for minimising an isotropic TV-like functional, and split-Bregman iterations for \(L^1\) minimisation.
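As a concrete baseline for the least-squares variants discussed above, here is a minimal unweighted least-squares normal-field integrator on a regular grid (our own sketch, using forward differences and a dense solver; the paper's weighted, TV and \(L^1\) models replace this plain quadratic fidelity term):

```python
import numpy as np

def integrate_normals_lsq(p, q):
    """Recover a height map z (up to an additive constant) from gradient
    estimates p ~ dz/dx (across columns) and q ~ dz/dy (across rows) by
    solving the least-squares system  grad z = (p, q)  with forward
    differences.  p and q are (H, W) arrays; only p[:, :-1] and q[:-1, :]
    are used.  Dense lstsq: for illustration on small grids only."""
    H, W = p.shape
    idx = lambda i, j: i * W + j          # flatten (i, j) to a single index
    A, b = [], []
    for i in range(H):
        for j in range(W - 1):            # z[i, j+1] - z[i, j] = p[i, j]
            row = np.zeros(H * W)
            row[idx(i, j + 1)], row[idx(i, j)] = 1.0, -1.0
            A.append(row); b.append(p[i, j])
    for i in range(H - 1):
        for j in range(W):                # z[i+1, j] - z[i, j] = q[i, j]
            row = np.zeros(H * W)
            row[idx(i + 1, j)], row[idx(i, j)] = 1.0, -1.0
            A.append(row); b.append(q[i, j])
    anchor = np.zeros(H * W)              # fix the free constant: z[0, 0] = 0
    anchor[0] = 1.0
    A.append(anchor); b.append(0.0)
    z, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return z.reshape(H, W)
```

For a consistent gradient field the residual is zero and the surface is recovered exactly; the paper's contribution lies in how the fidelity term is weighted or replaced so that edges of the surface are preserved.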
For the entire collection see [Zbl 1362.68008].A genetic algorithm estimation of the term structure of interest rates.https://www.zbmath.org/1453.620962021-02-27T13:50:00+00:00"Gimeno, Ricardo"https://www.zbmath.org/authors/?q=ai:gimeno.ricardo"Nave, Juan M."https://www.zbmath.org/authors/?q=ai:nave.juan-mSummary: The term structure of interest rates is a key instrument for financial research. It provides relevant information for pricing deterministic financial cash flows, it measures economic market expectations and it is extremely useful when assessing the effectiveness of monetary policy decisions. However, it is not directly observable and needs to be estimated by smoothing asset pricing data through statistical techniques. The most popular techniques adjust parsimonious functional forms based on bond yields to maturity. Unfortunately, these functions, which need to be optimised, are highly non-linear, which makes them very sensitive to the initial conditions. In this context, this paper proposes the use of genetic algorithms to find the values for the initial conditions and to reduce the risk of false convergence, showing that stable parameters are obtained without imposing arbitrary restrictions.Multiscale texture orientation analysis using spectral total-variation decomposition.https://www.zbmath.org/1453.940092021-02-27T13:50:00+00:00"Horesh, Dikla"https://www.zbmath.org/authors/?q=ai:horesh.dikla"Gilboa, Guy"https://www.zbmath.org/authors/?q=ai:gilboa.guySummary: Multi-level texture separation can considerably improve texture analysis, a significant component in many computer vision tasks. This paper aims at obtaining precise local texture orientations of images in a multiscale manner, characterizing the main obvious ones as well as the very subtle ones. We use the total variation spectral framework to decompose the image into its different textural scales. 
Gabor filter banks are then employed to detect prominent orientations within the multiscale representation. A necessary condition for perfect texture separation is given, based on the spectral total-variation theory. We show that using this method we can detect and differentiate a mixture of overlapping textures and obtain with high fidelity a multi-valued orientation representation of the image.
For the entire collection see [Zbl 1362.68008].Luminance-Hue specification in the RGB space.https://www.zbmath.org/1453.940142021-02-27T13:50:00+00:00"Pierre, Fabien"https://www.zbmath.org/authors/?q=ai:pierre.fabien"Aujol, Jean-François"https://www.zbmath.org/authors/?q=ai:aujol.jean-francois"Bugeau, Aurélie"https://www.zbmath.org/authors/?q=ai:bugeau.aurelie"Ta, Vinh-Thong"https://www.zbmath.org/authors/?q=ai:ta.vinh-thongSummary: This paper is concerned with a problem arising when editing color images, namely the Luminance-Hue Specification. This problem often occurs when converting an edited image in a given color-space to RGB. Indeed, the colors often fall outside the standard range of the RGB space, which is commonly used by most display hardware. Simple truncations lead to inconsistency in the hue and luminance of the edited image. We formalize and describe this problem from a geometrical point of view. A fast algorithm to solve the considered problem is given. We next focus on its application to image colorization in the RGB color space, while most methods use other color spaces. Using the three RGB channels directly, our model avoids artifact effects which appear with other color spaces. Finally, a variational model that regularizes color images while dealing with the Luminance-Hue Specification problem is proposed.
For the entire collection see [Zbl 1362.68008].Interactive multi-label segmentation of RGB-D images.https://www.zbmath.org/1453.940052021-02-27T13:50:00+00:00"Diebold, Julia"https://www.zbmath.org/authors/?q=ai:diebold.julia"Demmel, Nikolaus"https://www.zbmath.org/authors/?q=ai:demmel.nikolaus"Hazırbaş, Caner"https://www.zbmath.org/authors/?q=ai:hazirbas.caner"Moeller, Michael"https://www.zbmath.org/authors/?q=ai:moller.michael"Cremers, Daniel"https://www.zbmath.org/authors/?q=ai:cremers.danielSummary: We propose a novel interactive multi-label RGB-D image segmentation method by extending spatially varying color distributions [\textit{C. Nieuwenhuis} and \textit{D. Cremers}, Spatially varying color distributions for interactive multilabel segmentation. Trans. Pattern Anal. Mach. Intell. 35, No. 5, 1234-1247 (2013; \url{10.1109/TPAMI.2012.183})] to additionally utilize depth information in two different ways. On the one hand, we consider the depth image as an additional data channel. On the other hand, we extend the idea of spatially varying color distributions in a plane to volumetrically varying color distributions in 3D. Furthermore, we improve the data fidelity term by locally adapting the influence of nearby scribbles around each pixel. Our approach is implemented for parallel hardware and evaluated on a novel interactive RGB-D image segmentation benchmark with pixel-accurate ground truth. We show that depth information leads to considerably more precise segmentation results. At the same time, significantly fewer user scribbles are required to obtain the same segmentation accuracy as without using depth cues.
For the entire collection see [Zbl 1362.68008].Selecting marker genes for cancer classification using supervised weighted kernel clustering and the support vector machine.https://www.zbmath.org/1453.621982021-02-27T13:50:00+00:00"Shim, Jooyong"https://www.zbmath.org/authors/?q=ai:shim.jooyong"Sohn, Insuk"https://www.zbmath.org/authors/?q=ai:sohn.insuk"Kim, Sujong"https://www.zbmath.org/authors/?q=ai:kim.sujong"Lee, Jae Won"https://www.zbmath.org/authors/?q=ai:lee.jaewon"Green, Paul E."https://www.zbmath.org/authors/?q=ai:green.paul-e"Hwang, Changha"https://www.zbmath.org/authors/?q=ai:hwang.changhaSummary: Due to recent interest in the analysis of DNA microarray data, new methods have been considered and developed in the area of statistical classification. In particular, according to the gene expression profile of existing data, the goal is to classify the sample into a relevant diagnostic category. However, when classifying outcomes into certain cancer types, it is often the case that some genes are not important, while some genes are more important than others. A novel algorithm is presented for selecting such relevant genes referred to as marker genes for cancer classification. This algorithm is based on the Support Vector Machine (SVM) and Supervised Weighted Kernel Clustering (SWKC). To investigate the performance of this algorithm, the methods were applied to a simulated data set and some real data sets. For comparison, some other well-known methods such as Prediction Analysis of Microarrays (PAM), Support Vector Machine-Recursive Feature Elimination (SVM-RFE), and a Structured Polychotomous Machine (SPM) were considered. The experimental results indicate that the proposed SWKC/SVM algorithm is conceptually much simpler and performs more efficiently than other existing methods used in identifying marker genes for cancer classification. 
Furthermore, the SWKC/SVM algorithm has the advantage that it requires much less computing time compared with the other existing methods.Informative transcription factor selection using support vector machine-based generalized approximate cross validation criteria.https://www.zbmath.org/1453.622022021-02-27T13:50:00+00:00"Sohn, Insuk"https://www.zbmath.org/authors/?q=ai:sohn.insuk"Shim, Jooyong"https://www.zbmath.org/authors/?q=ai:shim.jooyong"Hwang, Changha"https://www.zbmath.org/authors/?q=ai:hwang.changha"Kim, Sujong"https://www.zbmath.org/authors/?q=ai:kim.sujong"Lee, Jae Won"https://www.zbmath.org/authors/?q=ai:lee.jaewonSummary: The genetic regulatory mechanism plays a pivotal role in many biological processes ranging from development to survival. The identification of the common transcription factor binding sites (TFBSs) from a set of known co-regulated gene promoters, and the identification of genes that are regulated by a transcription factor (TF) with an important role in a particular biological function, will advance our understanding of the interaction among the co-regulated genes and the intricate genetic regulatory mechanism underlying this function. To identify the common TFBSs from a set of known co-regulated gene promoters and classify genes that are regulated by TFs, new approaches using Support Vector Machine (SVM)-based Generalized Approximate Cross Validation (GACV) criteria are proposed. Two variable selection methods are considered: Recursive Feature Elimination (RFE) and Recursive Feature Addition (RFA). The performance of the proposed methods is compared with the existing SVM-based criteria, Logistic Regression Analysis (LRA), Logic Regression (LR), and Decision Tree (DT) methods using two real TF target gene data sets as well as simulated data.
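The RFE strategy mentioned above can be sketched as a generic loop; `fit` and `importance` below are hypothetical stand-ins for an SVM trainer and its per-feature scores, not the authors' code:

```python
def rfe(features, fit, importance, n_keep):
    """Recursive Feature Elimination sketch: repeatedly train on the
    surviving features and drop the least important one until n_keep
    features remain. `fit` and `importance` are illustrative callables."""
    features = list(features)
    while len(features) > n_keep:
        model = fit(features)                 # retrain on survivors
        scores = importance(model, features)  # per-feature importance
        worst = min(features, key=lambda f: scores[f])
        features.remove(worst)
    return features
```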
In terms of test error rates, the proposed methods perform better than the existing methods.Exact algorithms for the repetition-bounded longest common subsequence problem.https://www.zbmath.org/1453.682262021-02-27T13:50:00+00:00"Asahiro, Yuichi"https://www.zbmath.org/authors/?q=ai:asahiro.yuichi"Jansson, Jesper"https://www.zbmath.org/authors/?q=ai:jansson.jesper"Lin, Guohui"https://www.zbmath.org/authors/?q=ai:lin.guohui"Miyano, Eiji"https://www.zbmath.org/authors/?q=ai:miyano.eiji"Ono, Hirotaka"https://www.zbmath.org/authors/?q=ai:ono.hirotaka"Utashima, Tadatoshi"https://www.zbmath.org/authors/?q=ai:utashima.tadatoshiSummary: In this paper, we study exact, exponential-time algorithms for a variant of the classic \textsc{Longest Common Subsequence} problem called the \textsc{Repetition-Bounded Longest Common Subsequence} problem (or \textsf{RBLCS}, for short): Let an alphabet \(S\) be a finite set of symbols and an occurrence constraint \(C_{\mathrm{occ}}\) be a function \(C_{\mathrm{occ}} : S \to \mathbb{N} \), assigning an upper bound on the number of occurrences of each symbol in \(S\). Given two sequences \(X\) and \(Y\) over the alphabet \(S\) and an occurrence constraint \(C_{\mathrm{occ}}\), the goal of \textsf{RBLCS} is to find a longest common subsequence of \(X\) and \(Y\) such that each symbol \(s \in S\) appears at most \(C_{\mathrm{occ}}(s)\) times in the obtained subsequence. The special case where \(C_{\mathrm{occ}}(s) = 1\) for every symbol \(s \in S\) is known as the \textsc{Repetition-Free Longest Common Subsequence} problem (\textsf{RFLCS}) and has been studied previously;
e.g., in [Discrete Appl. Math. 158, No. 12, 1315--1324 (2010; Zbl 1226.05002)], \textit{S. S. Adi} et al.
presented a simple (exponential-time) exact algorithm for \textsf{RFLCS}. However, they did not analyze its time complexity in detail, and to the best of our knowledge, there are no previous results on the running times of any exact algorithms for this problem. Without loss of generality, we will assume that \(| X | \leq | Y |\) and \(| X | = n\). In this paper, we first propose a simpler algorithm for \textsf{RFLCS} based on the strategy
used in [loc. cit.] and show explicitly that its running time is \(O( 1.44225^n)\). Next, we provide a dynamic programming (DP) based algorithm for \textsf{RBLCS} and prove that its running time is \(O( 1.44225^n)\) for any occurrence constraint \(C_{\mathrm{occ}}\), and even less in certain special cases. In particular, for \textsf{RFLCS}, our DP-based algorithm runs in \(O( 1.41422^n)\) time, which is faster than the previous one. Furthermore, we prove NP-hardness and APX-hardness results for \textsf{RBLCS} on restricted instances.Approximation algorithms for the partial assignment problem.https://www.zbmath.org/1453.910542021-02-27T13:50:00+00:00"Gao, Guichen"https://www.zbmath.org/authors/?q=ai:gao.guichen"Ning, Li"https://www.zbmath.org/authors/?q=ai:ning.li"Ting, Hing-Fung"https://www.zbmath.org/authors/?q=ai:ting.hing-fung"Xu, Yicheng"https://www.zbmath.org/authors/?q=ai:xu.yicheng"Zhang, Yong"https://www.zbmath.org/authors/?q=ai:zhang.yong.8"Zou, Yifei"https://www.zbmath.org/authors/?q=ai:zou.yifeiSummary: In the partial assignment problem, a seller has an item set \(M = \{ i_1 , i_2 , \ldots , i_m \}\); the amount of each item is exactly one. There are \(n\) buyers \(N = \{ b_1, b_2, \ldots, b_n \}\); each buyer \(b_p\) has a preferred bundle \(B_p \subseteq M\) and a value function \(f_p(\cdot)\). Assume that each item should be sold integrally and thus can be assigned to at most one buyer. In previous works, buyers are often considered to be single-minded, i.e., a buyer can be either assigned the whole preferred bundle, or nothing. In this paper, we consider a more generalized and realistic model where the buyer can be partially satisfied, i.e., buyer \(b_p\) can have some positive value if the seller assigns a subset of \(b_p\)'s preferred bundle. However, there might be an exponential number of subsets; to tackle this situation, a value oracle is implemented.
We can get the value \(f_p( S_p)\) for buyer \(b_p\) and \(S_p \subseteq B_p\) by querying the value oracle. The objective is to assign items to buyers such that the total values are maximized, i.e., \( \max \sum_{p = 1}^n f_p( S_p)\). We first show that in this model, maximizing the total values is NP-hard. We then propose provably efficient approximation algorithms for general and submodular value functions, respectively. If the value function is non-negative, monotone, and normalized, a \(1 / \sqrt{m} \)-approximation algorithm can be achieved. If the value function is submodular, the total values can be approximated within a factor of \((1 - 1 / e)\).Improved bounds for two query adaptive bitprobe schemes storing five elements.https://www.zbmath.org/1453.680612021-02-27T13:50:00+00:00"Baig, Mirza Galib Anwarul Husain"https://www.zbmath.org/authors/?q=ai:baig.mirza-galib-anwarul-husain"Kesh, Deepanjan"https://www.zbmath.org/authors/?q=ai:kesh.deepanjanSummary: In this paper, we study two-bitprobe adaptive schemes storing five elements. For this class of schemes, the best known lower bound is \({\Omega}( m^{1 / 2})\)
due to [\textit{N. Alon} and \textit{U. Feige}, in: Proceedings of the 20th annual ACM-SIAM symposium on discrete algorithms, SODA'09. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 346--354 (2009; Zbl 1422.68040)].
Recently,
it was proved by the second author [``Space complexity of two adaptive bitprobe schemes storing three elements'', LIPIcs -- Leibniz Int. Proc. Inform. 122, Article No. 12, 12 p. (2018; \url{doi:10.4230/LIPIcs.FSTTCS.2018.12})]
that two-bitprobe adaptive schemes storing three elements will take at least \({\Omega}( m^{2 / 3})\) space, which also puts a lower bound on schemes storing five elements. In this work, we have improved the lower bound to \({\Omega}( m^{3 / 4})\). We also present a scheme for the same that takes \(\mathcal{O}( m^{5 / 6})\) space. This improves upon the \(\mathcal{O}( m^{18 / 19})\)-scheme
due to [\textit{M. Garg}, The bit-probe complexity of set membership. School of Technology and Computer Science, Tata Institute of Fundamental Research Mumbai (PhD Thesis) (2016)]
and the \(\mathcal{O}( m^{10 / 11})\)-scheme
due to [the authors et al., Lect. Notes Comput. Sci. 11355, 317--328 (2019; Zbl 1434.68119)].Recognizing the tractability in big data computing.https://www.zbmath.org/1453.681672021-02-27T13:50:00+00:00"Gao, Xiangyu"https://www.zbmath.org/authors/?q=ai:gao.xiangyu"Li, Jianzhong"https://www.zbmath.org/authors/?q=ai:li.jianzhong"Miao, Dongjing"https://www.zbmath.org/authors/?q=ai:miao.dongjing"Liu, Xianmin"https://www.zbmath.org/authors/?q=ai:liu.xianminSummary: Due to the limitation on computational power of existing computers, polynomial time does not work for identifying the tractable problems in big data computing. This paper adopts sublinear time as the new standard to recognize the tractability in big data computing and studies tractable problems in terms of computational complexity theory. The random-access Turing machine is used as the computational model to characterize the problems that are tractable on big data. First, the pure tractable class \(\texttt{PT}\) is proposed, and two important classes within it, \(\texttt{PPL}\) and \(\texttt{PDP}\), are studied. The structures of these two pure tractable classes are investigated in depth, and it is proved that \(\texttt{PPL}^i \subsetneq \texttt{PPL}^{i + 1}\), \(\texttt{PPL} \subsetneq \texttt{PT}\) and \(\texttt{PDP}^{k + 1} \subsetneq \texttt{PDP}^k \subsetneq \texttt{PT} \). Then, the pseudo-tractable class \(\texttt{PsT}\) is proposed and partitioned into two classes according to preprocessing techniques. The relations among the pseudo-tractable classes and other complexity classes are investigated, and it is proved that \(\texttt{PsT} \subseteq \texttt{P} \).
Finally, we show that this class is closed under reduction.Hardness and efficiency on minimizing maximum distances in spanning trees.https://www.zbmath.org/1453.681312021-02-27T13:50:00+00:00"Couto, Fernanda"https://www.zbmath.org/authors/?q=ai:couto.fernanda"Cunha, Luís Felipe I."https://www.zbmath.org/authors/?q=ai:cunha.luis-felipe-ignacioSummary: The \(t\)-admissibility problem aims to decide whether a graph \(G\) has a spanning tree \(T\) in which the distance between any two adjacent vertices of \(G\) is at most \(t\). Regarding its optimization version, the smallest \(t\) for which \(G\) is \(t\)-admissible is the stretch index of \(G\), denoted by \(\sigma_T(G)\). Deciding whether \(\sigma_T(G) \leq t\) is \(\mathsf{NP} \)-complete for \(t \geq 4\) and polynomial-time solvable for \(t = 2\), while the case \(t = 3\) remains open. We determine 3-admissible graph classes by studying graphs with few \(P_4\)'s, and we partially classify the \(\mathsf{P}\) vs \(\mathsf{NP}\)-complete dichotomy of the \(t\)-admissibility problem for \((k, \ell)\)-graphs. These graph classes generalize some others for which the computational complexity of the \(t\)-admissibility problem was already determined.
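To make the stretch index concrete: the smallest \(t\) witnessed by a fixed spanning tree \(T\) is the maximum, over the edges of \(G\), of the tree distance between their endpoints. A brute-force sketch (our own illustration, not the authors' algorithm):

```python
from collections import deque

def tree_stretch(graph_edges, tree_adj):
    """Max over edges (u, v) of G of the u-v distance in the spanning
    tree T, given as an adjacency list; this is the smallest t for
    which T witnesses t-admissibility."""
    def tree_dist(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in tree_adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist
    return max(tree_dist(u)[v] for u, v in graph_edges)
```

For example, the 4-cycle with the path 0-1-2-3 as spanning tree has stretch 3, because the cycle edge (0, 3) is stretched to the full path length.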
Moreover, we determine the stretch index for cycle-power graphs and for \((2, 1)\)-chordal graphs, which are subclasses of \((k, \ell)\)-graphs for which the \(t\)-admissibility problem is \(\mathsf{NP} \)-complete.Approximation algorithm for minimum weight connected-\(k\)-subgraph cover.https://www.zbmath.org/1453.681402021-02-27T13:50:00+00:00"Liu, Pengcheng"https://www.zbmath.org/authors/?q=ai:liu.pengcheng"Zhang, Zhao"https://www.zbmath.org/authors/?q=ai:zhang.zhao"Huang, Xiaohui"https://www.zbmath.org/authors/?q=ai:huang.xiaohuiSummary: For a given graph \(G\), the minimum weight connected-\(k\)-subgraph cover problem (MinWC \(k\) SC) is to find a minimum weight vertex subset \(C\) of \(G\) such that each connected subgraph of \(G\) on \(k\) vertices contains at least one vertex of \(C\). Previously, \textit{Y. Zhang} et al. [ibid. 535, 54--58 (2014; Zbl 1417.68167)]
presented a \((k - 1)\)-approximation algorithm for MinWC \(k\) SC under the assumption that the girth of \(G\), which is the length of a shortest cycle of \(G\), is at least \(k\). In this paper, we improve this result by showing that \((k - 1)\)-approximation can be achieved when the girth requirement is relaxed from \(k\) to \(2 k / 3\).Lower bounds for the happy coloring problems.https://www.zbmath.org/1453.681262021-02-27T13:50:00+00:00"Bliznets, Ivan"https://www.zbmath.org/authors/?q=ai:bliznets.ivan-a"Sagunov, Danil"https://www.zbmath.org/authors/?q=ai:sagunov.danilSummary: In this paper, we study the \textsc{Maximum Happy Vertices} and the \textsc{Maximum Happy Edges} problems (MHV and MHE for short). Very recently, the problems attracted a lot of attention and
were studied
in [\textit{A. Agrawal}, Lect. Notes Comput. Sci. 10765, 103--115 (2018; Zbl 06890111); \textit{N. R. Aravind} et al., Lect. Notes Comput. Sci. 9843, 281--292 (2016; Zbl 06631028);
\textit{J. Choudhari} and \textit{I. V. Reddy}, Lect. Notes Comput. Sci. 10755, 228--239 (2018; Zbl 06889933); \textit{N. Misra} and \textit{I. V. Reddy}, Lect. Notes Comput. Sci. 10765, 142--153 (2018; Zbl 06890114)].
The main focus of our work is lower bounds on the computational complexity of these problems. The established lower bounds can be divided into the following groups: NP-hardness of the above guarantee parameterization, kernelization lower bounds (answering questions of
Misra and Reddy [loc. cit.]),
exponential lower bounds under the \textsc{Set Cover Conjecture} and the \textsc{Exponential Time Hypothesis}, and inapproximability results. Moreover, we present an \(\mathcal{O}^\ast( \ell^k)\) randomized algorithm for MHV and an \(\mathcal{O}^\ast( 2^k)\) algorithm for MHE, where \(\ell\) is the number of colors used and \(k\) is the number of required happy vertices or edges. In view of the proved lower bounds, these algorithms cannot be improved to subexponential time.Sensitivity, affine transforms and quantum communication complexity.https://www.zbmath.org/1453.680842021-02-27T13:50:00+00:00"Dinesh, Krishnamoorthy"https://www.zbmath.org/authors/?q=ai:dinesh.krishnamoorthy"Sarma, Jayalal"https://www.zbmath.org/authors/?q=ai:sarma-m-n.jayalalSummary: In this paper, we study the Boolean function parameters sensitivity \(( \mathsf{s} )\), block sensitivity \(( \mathsf{bs} )\), and alternation \(( \mathsf{alt} )\) under specially designed affine transforms and show several applications. For a function \(f : \mathbb{F}_2^n \to \{- 1, 1 \} \), and \(A = M x + b\) for \(M \in \mathbb{F}_2^{n \times n}\) and \(b \in \mathbb{F}_2^n\), the result of the transformation \(g\) is defined as \(\forall x \in \mathbb{F}_2^n, g(x) = f(M x + b)\).
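To illustrate the transform just defined (the helper names here are ours, not the paper's), \(g\) can be evaluated directly with arithmetic over \(\mathbb{F}_2\):

```python
def affine_transform(f, M, b):
    """Return g with g(x) = f(Mx + b) over F_2, where f maps an n-bit
    tuple to {-1, 1}, M is a list of n rows of n bits, and b is a bit
    vector of length n. Illustrative sketch only."""
    n = len(b)
    def g(x):
        # y = Mx + b, computed coordinate-wise modulo 2
        y = tuple((sum(M[i][j] & x[j] for j in range(n)) + b[i]) % 2
                  for i in range(n))
        return f(y)
    return g
```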
As a warm-up, we study alternation under linear shifts (when \(M\) is restricted to be the identity matrix), called the \textit{shift invariant alternation}: the smallest alternation that can be achieved for the Boolean function \(f\) by shifts, denoted by \(\mathsf{salt}(f)\).
By a result of \textit{C. Lin} and \textit{S. Zhang} [LIPIcs -- Leibniz Int. Proc. Inform. 80, Article 51, 13 p. (2017; Zbl 1441.68053)],
it follows that \(\mathsf{bs}(f) \leq O( \mathsf{salt} ( f )^2 \mathsf{s}(f))\). Thus, to settle the Sensitivity Conjecture \(( \forall f, \mathsf{bs}(f) \leq \mathsf{poly}(\mathsf{s}(f)))\), it suffices to argue that \(\forall f, \mathsf{salt}(f) \leq \mathsf{poly}(\mathsf{s}(f))\). However, we exhibit an explicit family of Boolean functions for which \(\mathsf{salt}(f)\) is \(2^{{\Omega} ( \mathsf{s} ( f ) )} \). Going further, we use an affine transform \(A\), such that the corresponding function \(g\) satisfies \(\mathsf{bs}(f, 0^n) \leq \mathsf{s}(g)\). We apply this in the setting of quantum communication complexity to prove that for \(F(x, y) \overset{\text{def}}{=} f(x \wedge y)\), the bounded error quantum communication complexity of \(F\) with prior entanglement, \( Q_{1 / 3}^\ast(F)\) is \({\Omega}(\sqrt{ \mathsf{bs} ( f , 0^n )})\). Our proof builds on ideas from
[\textit{A. A. Sherstov}, Quantum Inf. Comput. 10, No. 5--6, 435--455 (2010; Zbl 1237.81040)]
where we use specific properties of the above affine transformation. Using this, we show the following.
\begin{itemize}
\item[(a)] For a fixed prime \(p\) and an \(\epsilon\), \(0 < \epsilon < 1\), any Boolean function \(f\) that depends on all its inputs with \(\deg_p(f) \leq(1 - \epsilon) \log n\) must satisfy \(Q_{1 / 3}^\ast(F) = {\Omega} ( \frac{ n^{\epsilon / 2}}{ \log n} )\). Here, \( \deg_p(f)\) denotes the degree of the multilinear polynomial over \(\mathbb{F}_p\) which agrees with \(f\) on Boolean inputs.
\item[(b)] For a Boolean function \(f\) such that there exist primes \(p\) and \(q\) with \(\deg_q(f) \geq {\Omega}( \deg_p ( f )^\delta)\) for \(\delta > 2\), the deterministic communication complexity \( \mathsf{D}(F)\) and \(Q_{1 / 3}^\ast(F)\) are polynomially related. In particular, this holds when \(\deg_p(f) = O(1)\). Thus, for this class of functions, this answers an open question
(see [\textit{H. Buhrman} and \textit{R. de Wolf}, ``Communication complexity lower bounds by polynomials'', in: Proceedings of the 16th annual IEEE conference on computational complexity, CCC'01. Los Alamitos, CA: IEEE Computer Society. 120--130 (2001; \url{doi:10.1109/CCC.2001.933879})])
about the relation between the two measures.
\end{itemize}
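For small \(n\), the measure \(\mathsf{s}\) studied above can be computed by brute force over the Boolean cube; the following sketch is ours, for illustration only:

```python
from itertools import product

def sensitivity(f, n):
    """Brute-force s(f) for f: {0,1}^n -> {-1, 1}: the maximum, over
    all inputs x, of the number of single-bit flips of x that change
    the value of f."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x)
            for i in range(n)
        )
        best = max(best, flips)
    return best
```

Parity attains the maximum possible sensitivity \(n\), since every bit flip changes its value.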
Restricting back to the linear setting, we construct a linear transformation \(A\) such that the corresponding function \(g\) satisfies \( \mathsf{alt}(f) \leq 2 \mathsf{s}(g) + 1\). Using this new relation, we exhibit Boolean functions \(f\) (other than the parity function) such that \(\mathsf{s}(f)\) is \({\Omega}(\sqrt{ \mathsf{sparsity} ( f )})\), where \(\mathsf{sparsity}(f)\) is the number of non-zero coefficients in the Fourier representation of \(f\). This family of Boolean functions also rules out a potential approach to settle the XOR Log-Rank conjecture via the recently settled Sensitivity conjecture [\textit{H. Huang}, Ann. Math. (2) 190, No. 3, 949--955 (2019; Zbl 1427.05116)].Inductive logic programming. 27th international conference, ILP 2017, Orléans, France, September 4--6, 2017. Revised selected papers.https://www.zbmath.org/1453.680162021-02-27T13:50:00+00:00"Lachiche, Nicolas (ed.)"https://www.zbmath.org/authors/?q=ai:lachiche.nicolas"Vrain, Christel (ed.)"https://www.zbmath.org/authors/?q=ai:vrain.christelThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1369.68018].
Indexed articles:
\textit{Antanas, Laura; Dries, Anton; Moreno, Plinio; De Raedt, Luc}, Relational affordance learning for task-dependent robot grasping, 1-15 [Zbl 07307755]
\textit{Bekker, Jessa; Davis, Jesse}, Positive and unlabeled relational classification through label frequency estimation, 16-30 [Zbl 07307756]
\textit{Dai, Wang-Zhou; Muggleton, Stephen; Wen, Jing; Tamaddoni-Nezhad, Alireza; Zhou, Zhi-Hua}, Logical vision: one-shot meta-interpretive learning from real images, 46-62 [Zbl 07307758]
\textit{Dumančić, Sebastijan; Blockeel, Hendrik}, Demystifying relational latent representations, 63-77 [Zbl 07307759]
\textit{Katzouris, Nikos; Artikis, Alexander; Paliouras, Georgios}, Parallel online learning of event definitions, 78-93 [Zbl 07307760]
\textit{Kaur, Navdeep; Kunapuli, Gautam; Khot, Tushar; Kersting, Kristian; Cohen, William; Natarajan, Sriraam}, Relational restricted Boltzmann machines: a probabilistic logic learning approach, 94-111 [Zbl 07307761]
\textit{Ribeiro, Tony; Tourret, Sophie; Folschette, Maxime; Magnin, Morgan; Borzacchiello, Domenico; Chinesta, Francisco; Roux, Olivier; Inoue, Katsumi}, Inductive learning from state transitions over continuous domains, 124-139 [Zbl 07307763]
\textit{Šourek, Gustav; Svatoš, Martin; Železný, Filip; Schockaert, Steven; Kuželka, Ondřej}, Stacked structure learning for lifted relational neural networks, 140-151 [Zbl 07307764]
\textit{Svatoš, Martin; Šourek, Gustav; Železný, Filip; Schockaert, Steven; Kuželka, Ondřej}, Pruning hypothesis spaces using learned domain theories, 152-168 [Zbl 07307765]Efficient computation of derivatives for solving optimization problems in R and Python using SWIG-generated interfaces to ADOL-C.https://www.zbmath.org/1453.901092021-02-27T13:50:00+00:00"Kulshreshtha, K."https://www.zbmath.org/authors/?q=ai:kulshreshtha.kshitij"Narayanan, S. H. K."https://www.zbmath.org/authors/?q=ai:narayanan.sri-hari-krishna"Bessac, J."https://www.zbmath.org/authors/?q=ai:bessac.julie"MacIntyre, K."https://www.zbmath.org/authors/?q=ai:macintyre.kSummary: Scripting languages are gaining acceptance because of their ease of use and value for rapid prototyping in many fields, including machine learning and statistics. In the context of algorithmic differentiation, however, the main development effort continues to be concentrated on traditional compiled languages such as Fortran and C/C\(++\), whether source transformation tools or operator overloading tools. There is therefore a need for AD tools for computing derivatives efficiently within scripting languages. ADOL-C is an operator overloading-based C\(++\) library that provides accurate first- and higher order derivatives for applications in C\(++\). SWIG is a preprocessor that uses the C/C\(++\) header files to wrap the API of a library to be callable from scripting languages such as R and Python and several other high-level programming languages. Although every language has its caveats, the overall process of making the C/C\(++\) API available via SWIG is the same for all scripting languages. After an initial effort required per language, because SWIG is an automated interface generator based on the library's actual header files, only minimal effort is required to maintain the scripting interface in sync with upstream developments in the original C/C\(++\) library. 
In addition to achieving our original goal of creating an interface for R, we were able to generate an interface for Python that proved an order of magnitude faster than the previously implemented interface. This paper gives an overview of the interface generation process, the challenges we encountered with both scripting languages and some numerical results to demonstrate both usefulness and efficiency.Differentiating through conjugate gradient.https://www.zbmath.org/1453.650682021-02-27T13:50:00+00:00"Christianson, Bruce"https://www.zbmath.org/authors/?q=ai:christianson.bruceSummary: We show that, although the conjugate gradient (CG) algorithm has a singularity at the solution, it is possible to differentiate forward through the algorithm automatically by re-declaring all the variables as truncated Taylor series, the type of active variable widely used in automatic differentiation (AD) tools such as ADOL-C. If exact arithmetic is used, this approach gives a complete sequence of correct directional derivatives of the solution, to arbitrary order, in a single cycle of at most \(n\) iterations, where \(n\) is the number of dimensions. In the inexact case, the approach emphasizes the need for a means by which the programmer can communicate certain conditions involving derivative values directly to an AD tool.A benchmark of selected algorithmic differentiation tools on some problems in computer vision and machine learning.https://www.zbmath.org/1453.650502021-02-27T13:50:00+00:00"Srajer, Filip"https://www.zbmath.org/authors/?q=ai:srajer.filip"Kukelova, Zuzana"https://www.zbmath.org/authors/?q=ai:kukelova.zuzana"Fitzgibbon, Andrew"https://www.zbmath.org/authors/?q=ai:fitzgibbon.andrew-wSummary: Algorithmic differentiation (AD) allows exact computation of derivatives given only an implementation of an objective function. Although many AD tools are available, a proper and efficient implementation of AD methods is not straightforward. 
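The forward propagation through CG described above (re-declaring variables as truncated Taylor series) can be sketched with first-order dual numbers; the `Dual` class below is our illustrative stand-in, not the ADOL-C API:

```python
# First-order dual numbers as a minimal stand-in for the truncated
# Taylor series used by overloading AD tools; names are illustrative.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(s, o): return Dual(s.val + o.val, s.dot + o.dot)
    def __sub__(s, o): return Dual(s.val - o.val, s.dot - o.dot)
    def __mul__(s, o): return Dual(s.val * o.val,
                                   s.dot * o.val + s.val * o.dot)
    def __truediv__(s, o): return Dual(s.val / o.val,
                                       (s.dot * o.val - s.val * o.dot)
                                       / (o.val * o.val))

def dot(u, v):
    acc = Dual(0.0)
    for a, b in zip(u, v):
        acc = acc + a * b
    return acc

def cg(A, b, iters):
    """Textbook CG for an SPD matrix A, run on Dual-valued data so the
    directional derivative of the solution is carried along."""
    n = len(b)
    x = [Dual(0.0) for _ in range(n)]
    r, p = list(b), list(b)              # x0 = 0, so r0 = b
    rs = dot(r, r)
    for _ in range(iters):
        Ap = [dot([Dual(aij) for aij in row], p) for row in A]
        alpha = rs / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = dot(r, r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

Seeding \(b\) with tangent \((1, 0)\) for a \(2 \times 2\) SPD system and running \(n = 2\) iterations recovers both \(x = A^{-1} b\) and the directional derivative \(A^{-1} (1, 0)^T\), consistent with the claim that one cycle of at most \(n\) iterations suffices in exact arithmetic.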
The existing tools are often too different to allow for a general test suite. In this paper, we compare 15 ways of computing derivatives including 11 automatic differentiation tools implementing various methods and written in various languages (C++, F\#, MATLAB, Julia and Python), 2 symbolic differentiation tools, finite differences and hand-derived computation.
We look at three objective functions from computer vision and machine learning. These objectives are for the most part simple, in the sense that no iterative loops are involved, and conditional statements are encapsulated in functions such as abs or logsumexp. However, it is important for the success of AD that such `simple' objective functions are handled efficiently, as so many problems in computer vision and machine learning are of this form.A usability case study of algorithmic differentiation tools on the ISSM ice sheet model.https://www.zbmath.org/1453.860492021-02-27T13:50:00+00:00"Hück, Alexander"https://www.zbmath.org/authors/?q=ai:huck.alexander"Bischof, Christian"https://www.zbmath.org/authors/?q=ai:bischof.christian-h"Sagebaum, Max"https://www.zbmath.org/authors/?q=ai:sagebaum.max"Gauger, Nicolas R."https://www.zbmath.org/authors/?q=ai:gauger.nicolas-r"Jurgelucks, Benjamin"https://www.zbmath.org/authors/?q=ai:jurgelucks.benjamin"Larour, Eric"https://www.zbmath.org/authors/?q=ai:larour.eric"Perez, Gilberto"https://www.zbmath.org/authors/?q=ai:perez.gilbertoSummary: Algorithmic differentiation (AD) based on operator overloading is often the only feasible approach for applying AD in complex C software environments. Challenges pertaining to the introduction of an AD tool based on operator overloading have been studied in the past. However, in order to assess possible performance gains or to verify derivative values, it is advantageous to be able to apply more than one AD tool to a given code. Hence, in this work, we investigate usability issues when exchanging AD tools. Our study is based on the NASA/JPL/UCI Ice Sheet System Model (ISSM) which currently employs the AD tool ADOL-C. We introduce CoDiPack to ISSM, a more recent AD tool offering a similar set of features while promising performance improvements. 
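Verifying derivative values across two AD tools, as discussed above, usually reduces to a finite-difference cross-check of the following shape (the helper names are illustrative, not from ISSM or any AD library):

```python
def central_diff(f, x, h=1e-6):
    """Second-order central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def derivatives_agree(d_tool, f, x, tol=1e-5):
    """Cross-check a derivative reported by an AD tool against a
    finite-difference estimate, as one does when swapping tools."""
    return abs(d_tool - central_diff(f, x)) <= tol
```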
In addition to the obvious type change for the AD-augmented float type, this transition requires the change to a different adjoint MPI library, adaptation of the MUMPS solver wrapper, and changes to the derivative seeding and extraction routines. We believe that these issues are fairly generic for numerical simulation software, and the issues we report on provide a blueprint for similar undertakings. We also believe that our experiences provide guidance towards the development of AD interfaces that support AD tool interoperability. In addition, we improve upon the memory management of the existing ADOL-C instrumentation, which exhibited considerable runtime problems for higher mesh resolutions. We conduct serial and parallel ISSM model runs on a 2D mass transport benchmark as well as a model of the Pine Island Glacier to verify the derivatives computed by both tools and report on runtime performance and memory usage. In comparison, the CoDiPack AD variant of ISSM runs faster with less memory overhead than the ADOL-C variant and, thus, enables future model runs with an increased number of mesh elements. But the existence of two different AD implementations provides added confidence in the correctness of derivatives, in particular for future AD tool versions.Source-to-source adjoint algorithmic differentiation of an ice sheet model written in C.https://www.zbmath.org/1453.860482021-02-27T13:50:00+00:00"Hascoët, Laurent"https://www.zbmath.org/authors/?q=ai:hascoet.laurent"Morlighem, M."https://www.zbmath.org/authors/?q=ai:morlighem.mSummary: Algorithmic Differentiation (AD) has become a powerful tool to improve our understanding of the Earth System, because it can generate adjoint code which permits efficient calculation of gradients that are essential to sensitivity studies, inverse problems, parameter estimation and data assimilation. Most source-to-source transformation tools, however, have been designed for FORTRAN and support for C remains limited. 
Here we use the Adjoinable Land Ice Flow model (ALIF), a C clone of the C++ Ice Sheet System Model (ISSM), and employ source-to-source AD to produce its adjoint code. We present the first running source-to-source adjoint of ALIF, and its application to basal drag inversion under Pine Island Glacier, West Antarctica. ALIF brought several challenges to AD tool development, such as the correct treatment of the context code, which does not compute the differentiable function, but controls this computation through the setup of data structures, including possible aliasing, as well as data-flow reversal in the presence of pointers and dynamic memory, which are ubiquitous in codes such as ISSM and ALIF. We present the strategies we have developed to overcome these challenges.Neural network solution of pantograph type differential equations.https://www.zbmath.org/1453.651652021-02-27T13:50:00+00:00"Hou, Chih-Chun"https://www.zbmath.org/authors/?q=ai:hou.chih-chun"Simos, Theodore E."https://www.zbmath.org/authors/?q=ai:simos.theodore-e"Famelis, Ioannis Th."https://www.zbmath.org/authors/?q=ai:famelis.ioannis-thSummary: We investigate the approximate solution of pantograph type functional differential equations using neural networks. The methodology is based on the ideas of Lagaris et al., and it is applied to various problems with a proportional delay term subject to initial or boundary conditions. The proposed methodology proves to be very efficient.Adaptive edge weighting for graph-based learning algorithms.https://www.zbmath.org/1453.681522021-02-27T13:50:00+00:00"Karasuyama, Masayuki"https://www.zbmath.org/authors/?q=ai:karasuyama.masayuki"Mamitsuka, Hiroshi"https://www.zbmath.org/authors/?q=ai:mamitsuka.hiroshiSummary: Graph-based learning algorithms, including label propagation and spectral clustering, are known to be effective state-of-the-art algorithms for a variety of tasks in machine learning applications. Given input data, i.e.
feature vectors, graph-based methods typically proceed with the following three steps: (1) generating graph edges, (2) estimating edge weights, and (3) running a graph-based algorithm. The first and second steps are difficult, especially when there are only a few (or no) labeled instances, while they are important because the performance of graph-based methods heavily depends on the quality of the input graph. For the second step of the three-step procedure, we propose a new method, which optimizes edge weights through a local linear reconstruction error minimization under a constraint that edges are parameterized by a similarity function of node pairs. As a result, our generated graph can capture the manifold structure of the input data, where each edge represents similarity of
each node pair. To further justify this approach, we also provide analytical considerations for our formulation, such as an interpretation as a cross-validation of a propagation model in the feature space, and an error analysis based on a low-dimensional manifold model. Experimental results demonstrated the effectiveness of our adaptive edge weighting strategy on both synthetic and real datasets.Active learning of Bayesian linear models with high-dimensional binary features by parameter confidence-region estimation.https://www.zbmath.org/1453.922302021-02-27T13:50:00+00:00"Inatsu, Yu"https://www.zbmath.org/authors/?q=ai:inatsu.yu"Karasuyama, Masayuki"https://www.zbmath.org/authors/?q=ai:karasuyama.masayuki"Inoue, Keiichi"https://www.zbmath.org/authors/?q=ai:inoue.keiichi"Kandori, Hideki"https://www.zbmath.org/authors/?q=ai:kandori.hideki"Takeuchi, Ichiro"https://www.zbmath.org/authors/?q=ai:takeuchi.ichiroSummary: In this letter, we study an active learning problem for maximizing an unknown linear function with high-dimensional binary features. This problem is notoriously complex but arises in many important contexts. When the sampling budget, that is, the number of possible function evaluations, is smaller than the number of dimensions, it tends to be impossible to identify all of the optimal binary features. Therefore, in practice, only a small number of such features are considered, with the majority kept fixed at certain default values, which we call the working set heuristic. The main contribution of this letter is to formally study the working set heuristic and present a suite of theoretically robust algorithms for more efficient use of the sampling budget. Technically, we introduce a novel method for estimating the confidence regions of model parameters that is tailored to active learning with high-dimensional binary features.
We provide a rigorous theoretical analysis of these algorithms and prove that a commonly used working set heuristic can identify optimal binary features with favorable sample complexity. We explore the performance of the proposed approach through numerical simulations and an application to a functional protein design problem.Erratum to: ``Computational complexity theory and the philosophy of mathematics''.https://www.zbmath.org/1453.680852021-02-27T13:50:00+00:00"Dean, Walter"https://www.zbmath.org/authors/?q=ai:dean.walterErratum to the author's paper [ibid. 27, No. 3, 381--439 (2019; Zbl 1451.68128)].On the sizes of DPDAs, PDAs, LBAs.https://www.zbmath.org/1453.680992021-02-27T13:50:00+00:00"Beigel, Richard"https://www.zbmath.org/authors/?q=ai:beigel.richard"Gasarch, William"https://www.zbmath.org/authors/?q=ai:gasarch.william-iSummary: There are languages \(A\) such that there is a Pushdown Automaton (PDA) that recognizes \(A\) which is much smaller than any Deterministic Pushdown Automaton (DPDA) that recognizes \(A\). There are languages \(A\) such that there is a Linear Bounded Automaton (Linear Space Turing Machine, henceforth LBA) that recognizes \(A\) which is much smaller than any PDA that recognizes \(A\). There are languages \(A\) such that both \(A\) and \(\overline{A}\) are recognizable by a PDA, but the PDA for \(A\) is much smaller than the PDA for \(\overline{A}\). There are languages \(A_1\), \(A_2\) such that \(A_1\), \(A_2\), \(A_1 \cap A_2\) are recognizable by a PDA, but the PDA for \(A_1\) and \(A_2\) are much smaller than the PDA for \(A_1 \cap A_2\). We investigate these phenomena and show that, in all these cases, the size difference is captured by a function whose Turing degree is on the second level of the arithmetic hierarchy.
Our theorems lead to infinitely-many-\(n\) results. For example, for infinitely many \(n\) there exists a language \(A_n\) recognized by a DPDA such that there is a small PDA for \(A_n\), but any DPDA for \(A_n\) is very large. We look at cases where we can get all-but-a-finite-number-of-\(n\) results, though with much smaller size differences.Conflict free version of covering problems on graphs: classical and parameterized.https://www.zbmath.org/1453.681362021-02-27T13:50:00+00:00"Jain, Pallavi"https://www.zbmath.org/authors/?q=ai:jain.pallavi"Kanesh, Lawqueen"https://www.zbmath.org/authors/?q=ai:kanesh.lawqueen"Misra, Pranabendu"https://www.zbmath.org/authors/?q=ai:misra.pranabenduSummary: Let \(\Pi\) be a family of graphs. In the classical \(\Pi\)-Vertex Deletion problem, given a graph \(G\) and a positive integer \(k\), the objective is to check whether there exists a subset \(S\) of at most \(k\) vertices such that \(G - S\) is in \(\Pi\). In this paper, we introduce the conflict free version of this classical problem, namely Conflict Free \(\Pi\)-Vertex Deletion (CF-\(\Pi\)-VD), and study this problem from the viewpoint of classical and parameterized complexity. In the CF-\(\Pi\)-VD problem, given two graphs \(G\) and \(H\) on the same vertex set and a positive integer \(k\), the objective is to determine whether there exists a set \(S \subseteq V(G)\), of size at most \(k\), such that \(G - S\) is in \(\Pi\) and \(H[S]\) is edgeless. Initiating a systematic study of these problems is one of the main conceptual contributions of this work. We obtain several results on the conflict free versions of several classical problems. Our first result shows that if \(\Pi\) is characterized by a finite family of forbidden induced subgraphs, then CF-\(\Pi\)-VD is Fixed Parameter Tractable (\(\mathsf{FPT}\)). Furthermore, we obtain improved algorithms for conflict free versions of several well studied problems.
Next, we show that if \(\Pi\) is characterized by a ``well-behaved'' infinite family of forbidden induced subgraphs, then CF-\(\Pi\)-VD is W[1]-hard. Motivated by this hardness result, we consider the parameterized complexity of CF-\(\Pi\)-VD when \(H\) is restricted to well studied families of graphs. In particular, we show that the conflict free version of several well-known problems such as Feedback Vertex Set, Odd Cycle Transversal, Chordal Vertex Deletion and Interval Vertex Deletion are \(\mathsf{FPT}\) when \(H\) belongs to the families of \(d\)-degenerate graphs and nowhere dense graphs.The simplest multi-key linearly homomorphic signature scheme.https://www.zbmath.org/1453.941292021-02-27T13:50:00+00:00"Aranha, Diego F."https://www.zbmath.org/authors/?q=ai:aranha.diego-f"Pagnin, Elena"https://www.zbmath.org/authors/?q=ai:pagnin.elenaSummary: We consider the problem of outsourcing computation on data authenticated by different users. Our aim is to describe and implement the simplest possible solution to provide data integrity in cloud-based scenarios. Concretely, our multi-key linearly homomorphic signature scheme \(({\mathsf{mklhs}})\) allows users to upload signed data on a server, and at any later point in time any third party can query the server to compute a linear combination of data authenticated by different users and check the correctness of the returned result. Our construction generalizes \textit{D. Boneh} et al.'s linearly homomorphic signature scheme [Lect. Notes Comput. Sci. 5443, 68--87 (2009; Zbl 1227.68024)] to the multi-key setting and relies on basic tools of pairing-based cryptography. Compared to existing multi-key homomorphic signature schemes, our \({\mathsf{mklhs}}\) is a conceptually simple and elegant direct construction, which trades-off privacy for efficiency. The simplicity of our approach leads us to a very efficient construction that enjoys significantly shorter signatures and higher performance than previous proposals. 
Finally, we implement \({\mathsf{mklhs}}\) using two different pairing-friendly curves at the 128-bit security level, a Barreto-Lynn-Scott curve and a Barreto-Naehrig curve. Our benchmarks illustrate interesting performance trade-offs between these parameters, involving the cost of exponentiation and hashing in pairing groups. We provide a discussion on such trade-offs that can be useful to other implementers of pairing-based protocols.
For the entire collection see [Zbl 1422.94005].Assessing the impact of HIV treatment interruptions using stochastic cellular automata.https://www.zbmath.org/1453.921382021-02-27T13:50:00+00:00"Hillmann, Andreas"https://www.zbmath.org/authors/?q=ai:hillmann.andreas"Crane, Martin"https://www.zbmath.org/authors/?q=ai:crane.martin"Ruskin, Heather J."https://www.zbmath.org/authors/?q=ai:ruskin.heather-jSummary: Chronic HIV infection causes a progressive decrease in the ability to maintain homeostasis resulting, after some time, in eventual break down of immune functions. Recent clinical research has shed light on a significant contribution of the lymphatic tissues, where HIV causes accumulation of collagen, (fibrosis). Specifically, where tissue is populated by certain types of functional stromal cells designated fibroblastic reticular cells (FRCs), these have been found to play a crucial role in balancing out apoptosis and regeneration of naïve T-cells through 2-way cellular signaling. Tissue fibrosis not only impedes this signaling, effectively reducing T-cell levels through increased apoptosis of cells of both T- and FRC type but has been found to be \textit{irreversible} by current HIV standard treatment (cART). While the therapy aims to block the viral lifecycle, cART-associated increase of T-cell levels in blood appears to conceal existing FRC impairment through fibrosis. This hidden impairment can lead to adverse consequences if treatment is interrupted, e.g. due to poor adherence (missing doses) or through periods recovering from drug toxicities. Formal clinical studies on treatment interruption have indicated possible adverse effects, but quantification of those effects in relation to interruption protocol and patient predisposition remains unclear. Accordingly, the impact of treatment interruption on lymphatic tissue structure and T-cell levels is explored here by means of computer simulation. 
A novel stochastic cellular automata model is proposed, which utilizes all sources of clinical detail available to us (though sparse in part) for model parametrization. Sources are explicitly referenced and conflicting evidence from previous studies explored. The main focus is on (i) spatial aspects of collagen build up, together with (ii) collagen increase after repeated treatment interruptions to explore the dynamics of HIV-induced fibrosis and T-cell loss.Coercing machine learning to output physically accurate results.https://www.zbmath.org/1453.681642021-02-27T13:50:00+00:00"Geng, Zhenglin"https://www.zbmath.org/authors/?q=ai:geng.zhenglin"Johnson, Daniel"https://www.zbmath.org/authors/?q=ai:johnson.daniel-cowan|johnson.daniel-d|johnson.daniel-h|johnson.daniel-p"Fedkiw, Ronald"https://www.zbmath.org/authors/?q=ai:fedkiw.ronald-pSummary: Many machine/deep learning artificial neural networks are trained to simply be interpolation functions that map input variables to output values interpolated from the training data in a linear/nonlinear fashion. Even when the input/output pairs of the training data are physically accurate (e.g. the results of an experiment or numerical simulation), interpolated quantities can deviate quite far from being physically accurate. Although one could project the output of a network into a physically feasible region, such a postprocess is not captured by the energy function minimized when training the network; thus, the final projected result could incorrectly deviate quite far from the training data. We propose folding any such projection or postprocess directly into the network so that the final result is correctly compared to the training data by the energy function. Although we propose a general approach, we illustrate its efficacy on a specific convolutional neural network that takes in human pose parameters (joint rotations) and outputs a prediction of vertex positions representing a triangulated cloth mesh. 
While the original network outputs vertex positions with erroneously high stretching and compression energies, the new network trained with our physics ``prior'' remedies these issues, producing highly improved results.Breaking randomized mixed-radix scalar multiplication algorithms.https://www.zbmath.org/1453.940742021-02-27T13:50:00+00:00"Detrey, Jérémie"https://www.zbmath.org/authors/?q=ai:detrey.jeremie"Imbert, Laurent"https://www.zbmath.org/authors/?q=ai:imbert.laurentSummary: In this paper we present a novel, powerful attack on a recently introduced randomized scalar multiplication algorithm based on covering systems of congruences. Our attack can recover the whole key with very few traces, even when those only provide partial information on the sequence of operations. In an attempt to solve the issues raised by the broken algorithm, we designed a constant-time version with no secret-dependent branching or memory accesses, based on the so-called mixed-radix number system. We eventually present our conclusions regarding the use of mixed-radix representations as a randomization setting.
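A mixed-radix number system, as used in the constant-time countermeasure above, represents an integer by digits taken against a fixed sequence of bases rather than a single radix. The sketch below is illustrative only; the paper's randomized recoding and elliptic-curve arithmetic are not shown, and the bases and example scalar are arbitrary:

```python
def to_mixed_radix(n, bases):
    """Decompose n into digits d_i with 0 <= d_i < bases[i].

    The represented value is the sum of d_i times the product of bases[:i].
    """
    digits = []
    for b in bases:
        n, d = divmod(n, b)
        digits.append(d)
    if n != 0:
        raise ValueError("bases too small to represent n")
    return digits


def from_mixed_radix(digits, bases):
    """Recompose an integer from its mixed-radix digits."""
    value, weight = 0, 1
    for d, b in zip(digits, bases):
        value += d * weight
        weight *= b
    return value


# Round trip with bases (2, 3, 5, 7): capacity 2 * 3 * 5 * 7 = 210.
bases = [2, 3, 5, 7]
assert to_mixed_radix(157, bases) == [1, 0, 1, 5]
assert from_mixed_radix([1, 0, 1, 5], bases) == 157
```

Randomizing the choice of bases or digit sets between runs is what makes such a representation attractive as a side-channel countermeasure; the attack in the paper exploits leakage about that choice.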
For the entire collection see [Zbl 1422.94005].Bag similarity network for deep multi-instance learning.https://www.zbmath.org/1453.681662021-02-27T13:50:00+00:00"Wang, Xinggang"https://www.zbmath.org/authors/?q=ai:wang.xinggang"Yan, Yongluan"https://www.zbmath.org/authors/?q=ai:yan.yongluan"Tang, Peng"https://www.zbmath.org/authors/?q=ai:tang.peng"Liu, Wenyu"https://www.zbmath.org/authors/?q=ai:liu.wenyu"Guo, Xiaojie"https://www.zbmath.org/authors/?q=ai:guo.xiaojieSummary: The effectiveness of multi-instance learning (MIL) has been demonstrated by its wide spectrum of applications in computer vision, biometrics, and natural language processing. Recently, solving MIL problems using deep neural networks has proven to be highly effective. However, in current multi-instance neural networks, the feature representation of each bag is learned individually, and the relations between bags are not considered. In this study, we propose a novel neural network for MIL that emphasizes modeling the affinities between bags. It achieves a more effective bag representation than previous methods. Specifically, a bag with multiple instances is modeled by its similarity to other bags, and the similarity calculation is carried out in a novel neural network, termed the bag similarity network (BSN). Training the BSN involves two representation learning problems: instance feature learning and bag similarity learning. To avoid the complex interdependence of these problems, we decouple the BSN training process by first training an instance feature learning network, and then construct a bag similarity network, each of which is optimized end-to-end by back-propagation. 
Experiments are conducted to demonstrate clearly the advantage of the proposed method over other state-of-the-art methods on various MIL datasets.An uncertain possibility-probability information fusion method under interval type-2 fuzzy environment and its application in stock selection.https://www.zbmath.org/1453.681842021-02-27T13:50:00+00:00"Sang, Xiuzhi"https://www.zbmath.org/authors/?q=ai:sang.xiuzhi"Zhou, Yingheng"https://www.zbmath.org/authors/?q=ai:zhou.yingheng"Yu, Xianyu"https://www.zbmath.org/authors/?q=ai:yu.xianyuSummary: Stock selection is one of the most significant topics in investment decision problems, which can be solved through aggregating financial variables of the enterprises. Because of the complexity and ambiguity of the stock exchange market, crisp financial ratios and their corresponding probabilities could not represent the fluctuation and ambiguity over a period. Interval type-2 fuzzy numbers have advantages in modeling uncertainty over type-1 fuzzy sets (T1 FSs), and have gained popularity in investment decisions. In this paper, we propose an uncertain possibility-probability information fusion method for stock selection under interval type-2 fuzzy environment. Firstly, based on the expected value and variance of the evaluations, both the interval type-2 fuzzy evaluations and probabilities are constructed. Secondly, considering the idea of probability for a fuzzy event, the fused probability with possibility-probability information is suggested to handle the uncertainty under interval type-2 fuzzy environment, and the entropy weight is used to reflect the importance degree for each criterion. Thirdly, the aggregation model based on the concept of the relative closeness to the ideal alternatives is used to compare the alternatives.
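The entropy weight mentioned above is a standard device in multi-criteria decision making: criteria whose values vary more across alternatives carry more information (lower entropy) and receive larger weights. A crisp (type-1) sketch of that construction, not the paper's interval type-2 formulation:

```python
import math

def entropy_weights(matrix):
    """Entropy weights for a decision matrix (rows = alternatives,
    columns = criteria, entries assumed positive).

    Each column is normalized to a probability vector, its Shannon
    entropy is scaled to [0, 1] by ln(m), and the weight is the
    normalized complement 1 - entropy.
    """
    m = len(matrix)
    n = len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        raw.append(1.0 - e)
    s = sum(raw)
    return [w / s for w in raw]

# A criterion that is constant across alternatives gets (near-)zero weight.
w = entropy_weights([[1.0, 1.0], [1.0, 3.0], [1.0, 5.0]])
assert abs(sum(w) - 1.0) < 1e-9 and w[0] < 1e-9
```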
Finally, the method is applied to stock selection in the real estate industry in China.Correlated utility-based pattern mining.https://www.zbmath.org/1453.681502021-02-27T13:50:00+00:00"Gan, Wensheng"https://www.zbmath.org/authors/?q=ai:gan.wensheng"Lin, Jerry Chun-Wei"https://www.zbmath.org/authors/?q=ai:lin.jerry-chun-wei"Chao, Han-Chieh"https://www.zbmath.org/authors/?q=ai:chao.hanchieh"Fujita, Hamido"https://www.zbmath.org/authors/?q=ai:fujita.hamido"Yu, Philip S."https://www.zbmath.org/authors/?q=ai:yu.philip-sSummary: Recently, a new research field called utility-oriented mining has attracted great attention. However, previous studies have shown a limitation in that they rarely consider the inherent correlation of items among patterns. For example, considering the purchase behaviors of consumers, a high-utility group of products (w.r.t. multi-products) may contain several very high-utility products with some low-utility products. However, it is considered to be a valuable pattern even if this behavior/pattern may not be highly correlated, or even if it happens by chance. In light of these challenges, we propose an efficient utility-mining approach, called non-redundant \textbf{\underline{Co}}rrelated high-\textbf{\underline{U}}tility \textbf{\underline{P}}attern \textbf{\underline{M}}iner (CoUPM), by considering the positive correlation and profitable value. The derived patterns with high utility and strong positive correlation can lead to more insightful availability than those patterns that only have high profitable values. The utility-list structure is revised and applied to store the necessary information of both correlation and utility. Several pruning strategies are further developed to improve the efficiency for discovering the desired patterns. Experimental results showed that the non-redundant correlated high-utility patterns are more effective than some other kinds of interesting patterns.
Moreover, in terms of efficiency, the proposed CoUPM algorithm significantly outperforms the state-of-the-art algorithm.An instance voting approach to feature selection.https://www.zbmath.org/1453.681482021-02-27T13:50:00+00:00"Chamakura, Lily"https://www.zbmath.org/authors/?q=ai:chamakura.lily"Saha, Goutam"https://www.zbmath.org/authors/?q=ai:saha.goutam-kumarSummary: In this work, we address the problem of supervised feature selection (FS) for high-dimensional datasets with a small number of instances. Here, we propose a novel heuristic FS approach, Conditional Priority Coverage Maximization (CPCM), which seeks to leverage the local information provided by the small set of instances. We define the vote assigned by an instance to a feature as the local relevance of the latter. Also, we show that the proposed voting scheme is asymptotically related to the Bayes' decision rule for minimum risk classification. Next, we exploit the instance votes for feature selection by posing it as a set-covering problem -- we seek to select a subset of features such that they can together cover the instances. This approach avoids the selection of redundant features, while selecting relevant ones. In addition, we formulate the stopping criterion to select a compact subset of features. Through experiments on synthetic and real datasets, we demonstrated that CPCM outperforms other graph-based FS techniques and state-of-the-art FS approaches employing mutual information (MI).
Further, we evaluated the stability of CPCM to minor variations in the training data and found it to be reasonably robust.An effective method to determine whether a point is within a convex hull and its generalized convex polyhedron classifier.https://www.zbmath.org/1453.681542021-02-27T13:50:00+00:00"Leng, Qiangkui"https://www.zbmath.org/authors/?q=ai:leng.qiangkui"Wang, Shurui"https://www.zbmath.org/authors/?q=ai:wang.shurui"Qin, Yuping"https://www.zbmath.org/authors/?q=ai:qin.yuping"Li, Yujian"https://www.zbmath.org/authors/?q=ai:li.yujianSummary: A convex polyhedron classifier that encloses the minority class using a combination of hyperplanes is potentially effective in imbalanced classification. To construct an easy-to-use convex polyhedron classifier, this paper first presents a theoretical foundation for determining whether a point is within the convex hull of a finite point set. This foundation corresponds to a geometric method in which the result is expressed as a separating hyperplane. If the given point and the given convex hull are located on either side of the learned hyperplane, this indicates that the point is outside of the convex hull. Otherwise, the conclusion that the point is within the convex hull can be obtained. As a generalization of the geometric method, a convex polyhedron classifier is further proposed for binary classification. If two finite point sets are polyhedrally separable, a series of hyperplanes can be learned as a combined (piecewise linear) classifier, which surrounds a point set that is inside using a convex polyhedron and excludes the other point set that is outside. Experimental results on twelve real-world datasets show that the proposed classifier is generally better than the other two piecewise linear classifiers. 
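In the plane, the point-in-convex-hull question underlying the classifier above can be answered with orientation tests against the hull boundary. The sketch below is a 2D toy only (the paper's method instead learns a separating hyperplane and works in arbitrary dimension); it builds the hull with Andrew's monotone chain and then checks that the point lies to the left of every counter-clockwise edge:

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 iff counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]


def in_convex_hull(point, points):
    """True iff point lies inside or on the convex hull of points."""
    hull = convex_hull(points)
    return all(cross(hull[i], hull[(i + 1) % len(hull)], point) >= 0
               for i in range(len(hull)))


square = [(0, 0), (2, 0), (2, 2), (0, 2)]
assert in_convex_hull((1, 1), square)
assert not in_convex_hull((3, 1), square)
```

If any orientation test is negative, the corresponding hull edge itself witnesses a separating line, which mirrors how the paper's geometric method reports the result as a hyperplane.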
Moreover, a comparison with several types of support vector machines confirms its competitiveness.An effective refined artificial bee colony algorithm for numerical optimisation.https://www.zbmath.org/1453.901872021-02-27T13:50:00+00:00"Bajer, Dražen"https://www.zbmath.org/authors/?q=ai:bajer.drazen"Zorić, Bruno"https://www.zbmath.org/authors/?q=ai:zoric.brunoSummary: Various complex problems have recently encouraged research and development of different bio-inspired optimisation algorithms, a well-known instance being the artificial bee colony (ABC) algorithm, both due to its simplicity and performance. Building upon the basic algorithm enabled further gains in performance but brought alongside it some specific costs and problems. The improved variants available in the literature often introduce additional user-defined parameters and sometimes completely infringe the algorithm structure. Focusing the search process on exploitation has proven to be a good first step of improvement in most cases, but analysing the effects of this modification on a limited set of standard benchmark functions could lead to a skewed perspective. This paper proposes a novel algorithm based on ABC that keeps the original structure intact, introduces a new solution update equation and an extended scout bee phase focusing the search on more prominent solutions without introducing new control parameters. Based on the conducted experimental analysis, it is able to outperform various competitive algorithms on a large test bed of benchmark functions and several real-world problems. 
The effects of the particular proposed modifications are also analysed and attention is given to two variants of the standard algorithm.A key-sharing based secure deduplication scheme in cloud storage.https://www.zbmath.org/1453.680702021-02-27T13:50:00+00:00"Wang, Liang"https://www.zbmath.org/authors/?q=ai:wang.liang"Wang, Baocang"https://www.zbmath.org/authors/?q=ai:wang.baocang"Song, Wei"https://www.zbmath.org/authors/?q=ai:song.wei"Zhang, Zhili"https://www.zbmath.org/authors/?q=ai:zhang.zhiliSummary: The data deduplication technique can efficiently eliminate redundant data by keeping only one copy of the duplicate data. Convergent encryption (CE) has been widely used in secure deduplication to save storage space and reduce data upload bandwidth, but it still faces two problems. One is that CE is not semantically secure, and suffers from an offline brute-force attack when the data is selected from a predictable set. Another is the convergent key (CK) management problem. CE requires each user holds an independent master key to encrypt its CK then stores them in the cloud, thus different users would store the same key for the duplicate copies. As the scale of users and data expand, the number of CK increase linearly. Therefore, so many keys stored repeatedly is a type of redundancy and brings about a key management issue. To enhance the security of CE, current schemes usually interact with a third party to generate a CK, but this brings an additional burden to the system. Recently, several schemes have been proposed for efficient CK management, but these schemes are confronted with a heavy computation and communication overhead and cannot resist the collusion attack. To deal with the above two problems, we propose a key-sharing method based on proof of ownership for secure deduplication. 
In the new scheme, only the initial uploader among the data owners encrypts the data with a randomly-chosen CK and then distributes the CK in the cloud, and only the users possessing the claimed data can retrieve the CK. The CK needs to be stored only once for each piece of duplicate data. Furthermore, our scheme adopts a deduplication check on the plaintexts and the consistency policy, and only a few owners need to encrypt the duplicate data. Analysis shows that our scheme is more efficient and remains secure in the proposed security model.TPNE: topology preserving network embedding.https://www.zbmath.org/1453.681442021-02-27T13:50:00+00:00"Xie, Yu"https://www.zbmath.org/authors/?q=ai:xie.yu"Gong, Maoguo"https://www.zbmath.org/authors/?q=ai:gong.maoguo"Qin, A. K."https://www.zbmath.org/authors/?q=ai:qin.a-k"Tang, Zedong"https://www.zbmath.org/authors/?q=ai:tang.zedong"Fan, Xiaolong"https://www.zbmath.org/authors/?q=ai:fan.xiaolongSummary: Network embedding aims at mapping nodes into a vectorial feature space while maximally preserving the topological relations of nodes in a network so as to facilitate complex network analysis. The existing works in network embedding are focused on preserving local or global topological information within limited step-sizes, which could be insufficient in many applications, even when the underlying network is undirected and unweighted. The complex and rich topological information in networks exerts paramount impacts on the formation of networks and can reveal the high-order relevance among different nodes. In this paper, we propose a novel network embedding framework based on deep neural networks, named Topology Preserving Network Embedding (TPNE), which is suitable for arbitrary types of information networks: directed or undirected, weighted or unweighted.
In this framework, we devise a closeness matrix to capture more comprehensive global topology of the network, and combine both global closeness reconstruction and local neighborhood preserving into a single loss function. To justify our model, we conduct extensive experiments on node classification, link prediction and node visualization tasks, employing the learned embedding vectors. It demonstrates that the proposed approach has promising performance and it outperforms the existing state-of-the-art approaches on several networks in such tasks.Some notes on optimal fuzzy reasoning methods.https://www.zbmath.org/1453.681822021-02-27T13:50:00+00:00"Li, Yingfang"https://www.zbmath.org/authors/?q=ai:li.yingfang"He, Xingxing"https://www.zbmath.org/authors/?q=ai:he.xingxing"Qin, Keyun"https://www.zbmath.org/authors/?q=ai:qin.keyun"Meng, Dan"https://www.zbmath.org/authors/?q=ai:meng.danSummary: The optimal fuzzy reasoning (OFR) method proposed by \textit{L. Zhang} and \textit{K.-Y. Cai} [Int. J. Intell. Syst. 19, No. 11, 1033--1049 (2004; Zbl 1101.68889)] is treated as a control problem and it is dramatically different from other existing fuzzy reasoning methods. The aim of this paper is to make some notes on OFR methods. We give a modification of an OFR method presented by Zhang and Cai. Some advantages and limitations of OFR methods are examined. We introduce several new OFR methods based on the Łukasiewicz implication, the Kleene-Dienes implication and the Zadeh implication. The reasoning consistency of the proposed methods is studied. 
The relationship between the OFR method and other fuzzy reasoning methods (e.g., the compositional rule of inference method and the pointwise optimizing fuzzy inference method) is examined.Labeled graph sketches: keeping up with real-time graph streams.https://www.zbmath.org/1453.680652021-02-27T13:50:00+00:00"Song, Chunyao"https://www.zbmath.org/authors/?q=ai:song.chunyao"Ge, Tingjian"https://www.zbmath.org/authors/?q=ai:ge.tingjian"Ge, Yao"https://www.zbmath.org/authors/?q=ai:ge.yao"Zhang, Haowen"https://www.zbmath.org/authors/?q=ai:zhang.haowen"Yuan, Xiaojie"https://www.zbmath.org/authors/?q=ai:yuan.xiaojieSummary: Currently, graphs serve as fundamental data structures for many applications, such as road networks, social and communication networks, and web requests. In many applications, graph edges stream in and users are only interested in the recent data. In data exploration, the storage and processing of such massive amounts of graph stream data has become a significant problem. As the categorical attributes of vertices and edges are often referred to as labels, we propose a labeled graph sketch that stores real-time graph structural information using only sublinear space and that supports graph queries of diverse types. This sketch also works for sliding-window queries. 
We conduct extensive experiments on real-world datasets in six different domains and compare the results with a state-of-the-art method to show the accuracy, efficiency, and practicability of our proposed approach.A parameter-free particle swarm optimization algorithm using performance classifiers.https://www.zbmath.org/1453.682322021-02-27T13:50:00+00:00"Harrison, Kyle Robert"https://www.zbmath.org/authors/?q=ai:harrison.kyle-robert"Ombuki-Berman, Beatrice M."https://www.zbmath.org/authors/?q=ai:ombuki-berman.beatrice-m"Engelbrecht, Andries P."https://www.zbmath.org/authors/?q=ai:engelbrecht.andries-petrusSummary: This paper presents an investigation into the short-term versus long-term performance of various particle swarm optimization (PSO) control parameter configurations. While evidence suggests that the best PSO parameter values to employ are time-dependent, this paper provides an in-depth examination of a small set of parameter values to provide a more concrete quantification of the performance degradation observed with specific control parameter configurations over time. Given that the short-term performance is not necessarily indicative of long-term performance, this paper proposes that machine learning techniques be used to build predictive models based on two easily-observable landscape characteristics. 
Finally, using the predictive models as a basis, this paper also proposes a parameter-free PSO algorithm, which performs on par with other top-performing PSO variants, namely the three best-performing static PSO configurations, particle swarm optimization with time-varying acceleration coefficients (PSO-TVAC), and particle swarm optimization with improved random constants (PSO-iRC).The optimal upper bound of the number of queries for Laplace mechanism under differential privacy.https://www.zbmath.org/1453.680722021-02-27T13:50:00+00:00"Li, Xiaoguang"https://www.zbmath.org/authors/?q=ai:li.xiaoguang"Li, Hui"https://www.zbmath.org/authors/?q=ai:li.hui.1|li.hui|li.hui.4|li.hui.2|li.hui.5|li.hui.3"Zhu, Hui"https://www.zbmath.org/authors/?q=ai:zhu.hui"Huang, Muyang"https://www.zbmath.org/authors/?q=ai:huang.muyangSummary: Differential privacy is a state-of-the-art technology for privacy preservation, and the Laplace mechanism is a simple and powerful tool for realizing differential privacy. However, differential privacy has an obvious flaw: each query function can only be executed a finite number of times, because an adversary can recover the real query result by executing the same query function many times. Unfortunately, how to set the upper bound on the number of linear queries is still an open issue. In this paper, we focus on the linear query function in Laplace-based mechanisms, and we propose a method to set the upper bound on the number of linear queries from the perspective of information theory. The main idea is as follows: first, we find the most aggressive linear query, i.e., the one that leaks the maximum information about the dataset to adversaries, and then we set the upper bound on the number of queries so that even the most aggressive linear query cannot leak the whole self-information about any individual to the adversary. In addition, the number of queries is also influenced by the type of dataset (continuous or discrete).
In this paper, we also discuss the different upper bounds on the number of queries for continuous datasets and discrete datasets. Finally, we conduct experiments on a continuous dataset and a discrete dataset to validate our result.Stabilizing the information granules formed by the principle of justifiable granularity.https://www.zbmath.org/1453.681892021-02-27T13:50:00+00:00"Zhongjie, Zhang"https://www.zbmath.org/authors/?q=ai:zhongjie.zhang"Jian, Huang"https://www.zbmath.org/authors/?q=ai:jian.huangSummary: The principle of justifiable granularity (PoJG) balances coverage and specificity to optimize information granularity. Although numerous studies have successfully formed information granules (IGs) using this principle, the stability of these IGs rarely receives attention. This paper analyzes the stability of such an IG's modal value, performance and upper or lower bound. First, we define some concepts to quantify this stability. Then, by means of the binomial distribution, the central limit theorem and the union bound, we prove some theorems, which rely on several reasonable hypotheses and reveal the relations between the data size and the stability of the IG's modal value, performance and upper or lower bound. Furthermore, we put forward an algorithm built on these theorems to generate stable IGs by constructing data of the proper size. Finally, we analyze its time complexity, applications and limitations.
Experiments indicate the reliability of this algorithm when it is applied to several probability distributions and to large-scale real datasets.A multi-population evolutionary algorithm with single-objective guide for many-objective optimization.https://www.zbmath.org/1453.901472021-02-27T13:50:00+00:00"Liu, Haitao"https://www.zbmath.org/authors/?q=ai:liu.haitao"Du, Wei"https://www.zbmath.org/authors/?q=ai:du.wei"Guo, Zhaoxia"https://www.zbmath.org/authors/?q=ai:guo.zhaoxiaSummary: This paper develops a multi-population evolutionary algorithm with a single-objective guide to tackle many-objective optimization problems. It exploits the merits of both multiple populations and single-objective optimization to balance diversity and convergence of the evolution process. Specifically, the single-objective guide process helps to construct a better ideal point and better reference points. A novel objective space partitioning mechanism is developed to transform a many-objective optimization problem into multiple subproblems, each of which is tackled by a subpopulation. A novel information sharing mechanism between subpopulations is proposed to balance diversity and convergence. Finally, the subpopulations are merged to tackle the many-objective optimization problem and further enhance convergence. We have compared the performance of the proposed algorithm with nine state-of-the-art algorithms on 85 test instances of 21 benchmark problems with up to 15 objectives.
Experimental results show that the proposed algorithm achieves superior performance in solving multi- and many-objective optimization problems.Tree-based space partition and merging ensemble learning framework for imbalanced problems.https://www.zbmath.org/1453.681622021-02-27T13:50:00+00:00"Zhu, Zonghai"https://www.zbmath.org/authors/?q=ai:zhu.zonghai"Wang, Zhe"https://www.zbmath.org/authors/?q=ai:wang.zhe"Li, Dongdong"https://www.zbmath.org/authors/?q=ai:li.dongdong"Du, Wenli"https://www.zbmath.org/authors/?q=ai:du.wenliSummary: Dealing with imbalanced problems is a significant challenge in machine learning, especially when the data set exhibits an irregular distribution. To this end, this paper proposes a tree-based space partition and merging ensemble learning framework, known as the space partition tree (SPT), which partitions the data space into two sub-spaces recursively. The partition hyperplane splits the current space according to the maximum scatter direction of the majority set in the current space. When a partitioned sub-space satisfies the termination conditions, the sub-space is regarded as a decision space, and the decision regions of the minority and majority classes are learned in this decision space. By merging the decision regions in all decision spaces, the SPT provides the entire decision region for the original problem. In this way, the original complex problem can be divided into smaller problems with a relatively balanced and regular distribution. Moreover, the designed partition strategy offers advantages for the recognition of minority samples in the decision space. Finally, the space partition and merging exhibit clear geometric intuition and promote diversity.
By introducing the biased-penalties support vector machine (BPSVM) into the SPT, the resulting SPT-BPSVM demonstrates satisfactory performance and validates the effectiveness of the SPT in the experiments.A simple PID-based strategy for particle swarm optimization algorithm.https://www.zbmath.org/1453.902172021-02-27T13:50:00+00:00"Xiang, Zhenglong"https://www.zbmath.org/authors/?q=ai:xiang.zhenglong"Ji, Daomin"https://www.zbmath.org/authors/?q=ai:ji.daomin"Zhang, Heng"https://www.zbmath.org/authors/?q=ai:zhang.heng"Wu, Hongrun"https://www.zbmath.org/authors/?q=ai:wu.hongrun"Li, Yuanxiang"https://www.zbmath.org/authors/?q=ai:li.yuanxiangSummary: The particle swarm optimization (PSO) algorithm, which updates particles by considering their past momentum and current direction, has demonstrated its power in several optimization applications. However, the updating strategy followed by the standard PSO mainly aims to learn from the global optimum, which often leads to PSO suffering from premature convergence. Using the past momentum can result in the overshoot problem, which usually slows down convergence in complex optimization problems. Inspired by the massive success of the proportional-integral-derivative (PID) controller in automatic control, we first establish a connection between the PSO process and a PID controller-based control system. Thereafter, we propose a PID-based strategy for PSO (PBS-PSO) to accelerate convergence and adjust the search direction to escape local optima. The proposed PBS-PSO utilizes the past and current global best, together with its change, to update the search direction. We conduct experiments on the CEC2013 test suite benchmark. The experimental results demonstrate the effectiveness of our proposed PBS-PSO algorithm.
Most importantly, we find that the proposed PID-controller-based strategy has good generalization ability because it can be combined with other PSO variants to improve convergence performance in most cases.Multiple ellipse fitting of densely connected contours.https://www.zbmath.org/1453.681682021-02-27T13:50:00+00:00"Li, Hui"https://www.zbmath.org/authors/?q=ai:li.hui.3|li.hui|li.hui.1|li.hui.2|li.hui.4|li.hui.5Summary: Multiple ellipse fitting is challenging and at the same time essential, as it has a variety of applications in biology, chemistry, and nanotechnology. An accurate, effective, and reliable approach to the fitting problem has always been desirable. In this paper, we address a category of the multiple ellipse fitting problem that fits densely connected contours. We propose a framework, rather than design an algorithm, for the problem. The framework streamlines five processes: sorting the contour points, performing ellipse fitting in sliding windows, detecting context anomalies, performing clustering, and obtaining multiple ellipses through a second ellipse fitting. The framework is evaluated in a real-world application of handprint identification and on various synthetic datasets. Experimental results show that the framework can extract multiple ellipses from contours with satisfactory accuracy and efficiency.
Shadowed sets are a schema of three-way approximations of fuzzy sets which are formed based on a specific optimization mechanism. Among the different principles guiding the construction of shadowed sets, the criterion of minimum distance offers a new insight within the framework of three-way decision theory. In this paper, the essential mathematical properties of the objective function used as a criterion to construct three-way approximations of fuzzy sets based on the principle of minimal distance, as well as the characteristics of the optimal solutions, are analyzed. It is demonstrated that this optimization objective function is continuous but nonconvex with respect to the optimized variables. The nonconvexity makes the problem difficult to solve, and different approximate region partitions are obtainable even under the same optimization model. Therefore, further criteria are required to select the final partition thresholds and make the construction process well-defined. To address this limitation, the notion of constrained three-way approximations of fuzzy sets is proposed from the perspective of minimal distance. Moreover, a constructive algorithm is provided to obtain the proposed constrained three-way approximations, rather than using a direct enumeration method, and its performance is illustrated by considering some typical fuzzy sets along with some data from the UCI repository.
It recommends to users new items that may be of interest to them, based on their reported historical data (i.e., the items they have already been interested in). The reported historical data leads to significant privacy risks when the recommendation service is not fully trusted. Many studies have focused on developing differential privacy mechanisms to protect personal data in various recommendation settings. However, most of these mechanisms cannot ensure the accuracy of the recommendations. The main reason is that these methods compute similarity directly from the perturbed data. The computed similarity is thus always inaccurate, and this inaccurate similarity finally leads to inaccurate recommendation results. In this paper, we propose a locally differentially private item-based collaborative filtering framework, which protects users' private historical data on the user side and reconstructs the similarity on the server side to ensure recommendation accuracy. The similarities are reconstructed for every pair of items by estimating the number of users who have rated neither, either one, or both of them. The final recommendation is generated from the reconstructed similarities. Experimental results show that our proposed method significantly outperforms the state-of-the-art methods in terms of the recommendation accuracy and the trade-off between privacy and accuracy.
This may bias the search toward either feasible or infeasible solutions. To address this issue, we propose a feasible-ratio control technique that controls the ratio of feasible solutions in the population. By using this control technique, an EA can balance the search between feasible and infeasible regions. Based on this technique, we propose a constraint-handling EA named FRC-CEA, which consists of two optimization stages. In the first stage, an enhanced dynamic multi-objective evolutionary algorithm (DCMOEA) with the feasible-ratio control technique is adopted to handle constraints. In the second stage, a commonly used differential evolution (DE) is applied to speed up convergence. The performance of the proposed method is evaluated and compared with six state-of-the-art constraint-handling algorithms on two benchmark test suites. Experimental results suggest that the proposed method outperforms or is highly competitive against the compared algorithms on most test problems.Functional broadcast encryption with applications to data sharing for cloud storage.https://www.zbmath.org/1453.680692021-02-27T13:50:00+00:00"Wang, Huige"https://www.zbmath.org/authors/?q=ai:wang.huige"Zhang, Yuan"https://www.zbmath.org/authors/?q=ai:zhang.yuan"Chen, Kefei"https://www.zbmath.org/authors/?q=ai:chen.kefei"Sui, Guangye"https://www.zbmath.org/authors/?q=ai:sui.guangye"Zhao, Yunlei"https://www.zbmath.org/authors/?q=ai:zhao.yunlei"Huang, Xinyi"https://www.zbmath.org/authors/?q=ai:huang.xinyiSummary: Cloud storage services provide data owners an efficient and flexible way to share data. Some of the shared data are very sensitive and should be protected against any leakage. If users conventionally encrypt the data, however, flexible sharing is lost. Public-key encryption with access control (PEAC) resolves this tension.
Most existing PEAC schemes only allow the data owner to control either which parts of the data can be accessed by other users (file-based PEAC) or the membership of the users that access the entire data set (receiver-based PEAC). However, in reality a PEAC scheme with both file-based and receiver-based functionalities is required to ensure the efficiency, flexibility, and fine-grainedness of the data sharing service. In this paper, we introduce the primitive of functional broadcast encryption (FBE). FBE is a manifestation of PEAC that enables a data owner to share a set of data files with a group of users, where only a specific subset of the data files can be accessed and decrypted by a specific subgroup of users. We describe a construction for FBE based on indistinguishability obfuscation \((i \mathcal{O})\). Security analysis demonstrates that the proposed scheme achieves selective IND-CCA security, and a comprehensive performance analysis shows that the proposed scheme is efficient.
However, some quantitative measures of feature sets may be nonmonotonic in neighborhood rough sets, and many feature selection methods based on evaluation functions yield high cardinality and low predictive accuracy. Therefore, investigating effective and efficient heuristic reduction algorithms is necessary. In this paper, a novel feature selection method based on neighborhood rough sets using neighborhood entropy-based uncertainty measures for cancer classification from gene expression data is proposed. First, some neighborhood entropy-based uncertainty measures are investigated for handling the uncertainty and noise of neighborhood decision systems. Then, to fully reflect the decision-making ability of attributes, the neighborhood credibility and neighborhood coverage degrees are defined and introduced into decision neighborhood entropy and mutual information, which are proven to be nonmonotonic. Moreover, some of the properties and relationships among these measures are derived, which is helpful for understanding the essence of the knowledge content and the uncertainty of neighborhood decision systems. Finally, the Fisher score method is employed to preliminarily eliminate irrelevant genes to significantly reduce complexity, and a heuristic feature selection algorithm with low computational complexity is presented to improve the performance of cancer classification using gene expression data. 
Experiments on ten gene expression datasets show that our proposed algorithm is indeed efficient and outperforms other related methods in terms of the number of selected genes and the classification accuracy, especially as the number of genes increases.Uncertain database retrieval with measure-based belief function attribute values.https://www.zbmath.org/1453.680672021-02-27T13:50:00+00:00"Yager, Ronald R."https://www.zbmath.org/authors/?q=ai:yager.ronald-r"Alajlan, Naif"https://www.zbmath.org/authors/?q=ai:alajlan.naif"Bazi, Yakoub"https://www.zbmath.org/authors/?q=ai:bazi.yakoubSummary: We discuss how the Dempster-Shafer belief structure provides a framework for modeling an uncertain value \(\widetilde{x}\) from some domain \(X\). We note how it involves a two-step process: the random determination of one focal element (set) guided by a probability distribution and then the selection of \(\widetilde{x}\) from this focal element in some unspecified manner. We generalize this framework by allowing the selection of the focal element to be determined by a random experiment guided by a fuzzy measure. In either case the anticipation that \(\widetilde{x}\) lies in some subset \(E\) is interval-valued, [Bel(E), Pl(E)]. We next look at database retrieval and turn to the issue of determining whether a database entity with an uncertain attribute value satisfies a desired value. Here we model our uncertain attribute value as \(\widetilde{x}\) and our desired value as a subset \(E\). In this case the degree of satisfaction of the query \(E\) by the entity is [Bel(E), Pl(E)]. In order to compare these interval-valued satisfactions we use the Golden rule representative value to turn the intervals into scalars.
We describe an application involving retrieval from an uncertain database.Robust \(\ell_2\)-hypergraph and its applications.https://www.zbmath.org/1453.681512021-02-27T13:50:00+00:00"Jin, Taisong"https://www.zbmath.org/authors/?q=ai:jin.taisong"Yu, Zhengtao"https://www.zbmath.org/authors/?q=ai:yu.zhengtao"Gao, Yue"https://www.zbmath.org/authors/?q=ai:gao.yue"Gao, Shengxiang"https://www.zbmath.org/authors/?q=ai:gao.shengxiang"Sun, Xiaoshuai"https://www.zbmath.org/authors/?q=ai:sun.xiaoshuai"Li, Cuihua"https://www.zbmath.org/authors/?q=ai:li.cuihuaSummary: The hypergraph, an important learning tool for modeling high-order data correlations, has a wide range of applications in machine learning and computer vision. The key issue in hypergraph-based applications is to construct an informative hypergraph, in which the hyperedges effectively represent the high-order data correlations. In practice, real-world data are usually sampled from a union of non-linear manifolds. Due to noise and data corruptions, many data samples deviate from the underlying data manifolds. To construct an informative hypergraph that represents the real-world data distribution well, we propose a hypergraph model (\(\ell_2\)-Hypergraph). Our model generates each hyperedge by solving an affine subspace ridge regression problem, where the samples with non-zero representation coefficients are used for hyperedge generation. Specifically, to be robust to sparse noise and corruptions, a sparse constraint is imposed on the data errors. We have conducted image clustering and classification experiments on real-world datasets.
The experimental results demonstrate that our hypergraph model is superior to the existing hypergraph construction methods in both accuracy and robustness to sparse noise.On the degeneracy of the Randić entropy and related graph measures.https://www.zbmath.org/1453.681322021-02-27T13:50:00+00:00"Dehmer, Matthias"https://www.zbmath.org/authors/?q=ai:dehmer.matthias"Chen, Zengqiang"https://www.zbmath.org/authors/?q=ai:chen.zengqiang"Mowshowitz, Abbe"https://www.zbmath.org/authors/?q=ai:mowshowitz.abbe"Jodlbauer, Herbert"https://www.zbmath.org/authors/?q=ai:jodlbauer.herbert"Emmert-Streib, Frank"https://www.zbmath.org/authors/?q=ai:emmert-streib.frank"Shi, Yongtang"https://www.zbmath.org/authors/?q=ai:shi.yongtang"Tripathi, Shailesh"https://www.zbmath.org/authors/?q=ai:tripathi.shailesh"Xia, Chengyi"https://www.zbmath.org/authors/?q=ai:xia.chengyiSummary: Numerous quantitative graph measures have been defined and applied in various disciplines. Such measures may be differentiated according to whether they are information-theoretic or non-information-theoretic. In this paper, we examine an important property of Randić entropy, an information-theoretic measure, and examine some related graph measures based on random roots. In particular, we investigate the degeneracy of these structural graph measures and discuss numerical results. 
Finally, we draw some conclusions about the measures' applicability to deterministic and non-deterministic networks.Supervisory control of a class of Petri nets with unobservable and uncontrollable transitions.https://www.zbmath.org/1453.931462021-02-27T13:50:00+00:00"You, Dan"https://www.zbmath.org/authors/?q=ai:you.dan"Wang, ShouGuang"https://www.zbmath.org/authors/?q=ai:wang.shouguang"Seatzu, Carla"https://www.zbmath.org/authors/?q=ai:seatzu.carlaSummary: This paper uses Petri nets (PNs) as a modeling tool to deal with the forbidden state problem of discrete event systems (DESs) in the presence of both unobservable and uncontrollable events. First of all, it is proved that two state specifications are equivalent if their admissible marking sets coincide. Motivated by this result, we focus on studying how to compute optimal policies with respect to a state specification that is an admissible linear constraint. Thanks to many approaches in the literature that allow one to efficiently transform an arbitrary linear constraint into an admissible one with the admissible marking set unchanged, the proposed result remains useful in the more general case of arbitrary linear constraints. Specifically, focusing on ordinary PNs subject to an admissible linear constraint, we propose an optimal control policy whose computation mainly lies in the computation of the \textit{unobservable minimal decrease}, a parameter depending on the current observation and the given constraint. A procedure to compute such a parameter with polynomial complexity is proposed provided that a particular subnet, called observation subnet, is acyclic and backward-conflict and backward-concurrent free (BBF). 
As a result, under such assumptions, the optimal control policy can be computed with polynomial complexity.A polynomial time subsumption algorithm for nominal safe \(\mathcal{ELO}_\bot\) under rational closure.https://www.zbmath.org/1453.681772021-02-27T13:50:00+00:00"Casini, Giovanni"https://www.zbmath.org/authors/?q=ai:casini.giovanni"Straccia, Umberto"https://www.zbmath.org/authors/?q=ai:straccia.umberto"Meyer, Thomas"https://www.zbmath.org/authors/?q=ai:meyer.thomas-andreasSummary: Description Logics (DLs) under Rational Closure (RC) is a well-known framework for non-monotonic reasoning in DLs. In this paper, we address the concept subsumption decision problem under RC for nominal safe \(\mathcal{ELO}_\bot\), a notable and practically important DL representative of the OWL 2 profile OWL 2 EL. Our contribution here is to define a polynomial time subsumption procedure for nominal safe \(\mathcal{ELO}_\bot\) under RC that relies entirely on a series of classical, monotonic \(\mathcal{EL}_\bot\) subsumption tests. Therefore, any existing classical monotonic \(\mathcal{EL}_\bot\) reasoner can be used as a black box to implement our method. We then also adapt the method to one of the known extensions of RC for DLs, namely Defeasible Inheritance-based DLs, without losing computational tractability.Cost-sensitive active learning through statistical methods.https://www.zbmath.org/1453.681602021-02-27T13:50:00+00:00"Wang, Min"https://www.zbmath.org/authors/?q=ai:wang.min.2|wang.min|wang.min.1"Lin, Yao"https://www.zbmath.org/authors/?q=ai:lin.yao"Min, Fan"https://www.zbmath.org/authors/?q=ai:min.fan"Liu, Dun"https://www.zbmath.org/authors/?q=ai:liu.dunSummary: Clustering-based active learning splits the data into a number of blocks and queries the labels of the most representative instances. When the costs of labeling and misclassification are considered, we also face a key issue: How many labels should be queried for a given block?
In this paper, we present theoretical and practical statistical methods to handle this issue. The theoretical statistical method calculates the optimal number of query labels for a predefined label distribution. Considering label distributions for different clustering qualities, we obtain three hypothetical models, namely the Gaussian, Uniform, and V models. The practical statistical method calculates the empirical label distribution of the cluster blocks. Considering four popular clustering algorithms, we use symmetry and curve-fitting techniques on 30 datasets to obtain empirical distributions. Inspired by three-way decision, we design an algorithm called cost-sensitive active learning through statistical methods (CATS). Experiments were performed on 12 binary-class datasets for both the distribution evaluation and the learning task. The results of significance tests verify the effectiveness of CATS and its superior performance with respect to state-of-the-art cost-sensitive active learning algorithms.Decomposing SAT instances with pseudo backbones.https://www.zbmath.org/1453.681702021-02-27T13:50:00+00:00"Chen, Wenxiang"https://www.zbmath.org/authors/?q=ai:chen.wenxiang"Whitley, Darrell"https://www.zbmath.org/authors/?q=ai:whitley.darrellSummary: Two major search paradigms have been proposed for SAT solving: Systematic Search (SS) and Stochastic Local Search (SLS). In SAT competitions, while SLS solvers are effective on uniform random instances, SS solvers dominate SLS solvers on application instances with internal structure. One important structural property is decomposability. SS solvers have long exploited the decomposability of application instances with success. We conjecture that SLS solvers can be improved by exploiting the decomposability of application instances, and propose a first step toward exploiting decomposability with SLS solvers using pseudo backbones.
We then propose two SAT-specific optimizations that lead to better decomposition than is achievable on general pseudo-Boolean optimization problems. Our empirical study suggests that pseudo backbones can vastly simplify SAT instances, which further results in decomposing the instances into thousands of connected components. This decomposition serves as a key stepping stone for applying the powerful recombination operator, partition crossover, to the SAT domain. Moreover, we establish an a priori analysis for identifying problem instances with potential decomposability using visualization of MAXSAT instances and treewidth.
For the entire collection see [Zbl 1360.68014].Optimising web service composition based on differential fruit fly optimisation algorithm.https://www.zbmath.org/1453.680222021-02-27T13:50:00+00:00"Zhang, Yi-wen"https://www.zbmath.org/authors/?q=ai:zhang.yiwen"Wu, Jin-tao"https://www.zbmath.org/authors/?q=ai:wu.jintao"Guo, Xing"https://www.zbmath.org/authors/?q=ai:guo.xing"Lin, Guan-nan"https://www.zbmath.org/authors/?q=ai:lin.guan-nanSummary: With the emergence of a large number of web services with similar functional attributes but different non-functional attributes, how to select appropriate web services from a massive set of candidate services and assemble them into a service composition that can complete complex value-added business processes is of great concern. QoS-aware service composition is a multi-objective decision optimisation problem. Its goal is to optimise the QoS of the entire service composition. To solve this problem, this paper presents a differential fruit fly optimisation algorithm (DFOA). First, a fast global optimisation is performed through FOA. Then, a local search is performed in the neighbourhoods of the global optimum based on fruit fly swarm differential mutation and crossover operations. Finally, the global optimal solution is updated according to the two optimisation results, thereby obtaining the optimal service composition. Experiments verify the feasibility of the algorithm.A hybrid feature selection algorithm based on large neighborhood search.https://www.zbmath.org/1453.681752021-02-27T13:50:00+00:00"Taghizadeh, Gelareh"https://www.zbmath.org/authors/?q=ai:taghizadeh.gelareh"Musliu, Nysret"https://www.zbmath.org/authors/?q=ai:musliu.nysretSummary: Feature selection aims at choosing a small number of relevant features in a data set to achieve similar or even better classification accuracy than using all features. This paper presents the first study of the Large Neighborhood Search (LNS) algorithm for the feature selection problem.
We propose a novel hybrid Wrapper and Filter feature selection method using the LNS algorithm (WFLNS). In LNS, an initial solution is gradually improved by alternately destroying and repairing the solution. We introduce the idea of using a filter ranking method in the destroy and repair steps to accelerate the search in identifying the core feature subsets. In particular, WFLNS either adds features to or removes features from a candidate solution based on correlation-based feature ranking. The proposed algorithm has been tested on twelve benchmark data sets and the results have been compared with ten recent wrapper methods; WFLNS outperforms the other methods on several of the data sets.
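A minimal destroy-and-repair skeleton of the filter-guided LNS idea described above might look like the following; the scoring rule, parameters, and function names are assumptions for illustration, not the WFLNS code:

```python
import random

def lns_feature_selection(features, rank, evaluate, iters=100,
                          destroy_k=2, repair_k=2, seed=0):
    """Large Neighborhood Search skeleton for feature selection.
    `rank` maps each feature to a filter score (e.g. correlation with
    the class); `evaluate` is a wrapper returning subset accuracy."""
    rng = random.Random(seed)
    current = set(features)
    best, best_score = set(current), evaluate(current)
    for _ in range(iters):
        cand = set(current)
        # Destroy: drop the weakest selected features (filter-guided).
        for f in sorted(cand, key=rank.get)[:destroy_k]:
            cand.discard(f)
        # Repair: re-insert strong unselected features, with randomness.
        outside = sorted(set(features) - cand, key=rank.get, reverse=True)
        for f in outside[:repair_k]:
            if rng.random() < 0.8:
                cand.add(f)
        score = evaluate(cand)
        if score >= best_score and cand:
            best, best_score = set(cand), score
            current = cand
    return best, best_score
```

The filter ranking only steers which features are destroyed or repaired; the wrapper evaluation still decides acceptance.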
For the entire collection see [Zbl 1360.68014].A new resource allocation strategy based on the relationship between subproblems for MOEA/D.https://www.zbmath.org/1453.901562021-02-27T13:50:00+00:00"Wang, Peng"https://www.zbmath.org/authors/?q=ai:wang.peng|wang.peng.2|wang.peng.1"Zhu, Wen"https://www.zbmath.org/authors/?q=ai:zhu.wen"Liu, Haihua"https://www.zbmath.org/authors/?q=ai:liu.haihua"Liao, Bo"https://www.zbmath.org/authors/?q=ai:liao.bo"Cai, Lijun"https://www.zbmath.org/authors/?q=ai:cai.lijun"Wei, Xiaohui"https://www.zbmath.org/authors/?q=ai:wei.xiaohui"Ren, Siqi"https://www.zbmath.org/authors/?q=ai:ren.siqi"Yang, Jialiang"https://www.zbmath.org/authors/?q=ai:yang.jialiangSummary: The multi-objective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multi-objective optimization problem (MOP) into a set of simple scalar objective optimization sub-problems and solves them in a collaborative way. Since the sub-problems differ in optimization difficulty and computational resource demand, it is critical to reasonably allocate computational resources among them, which can optimize the usage of resources and improve the performance of an algorithm. This paper proposes a new resource allocation strategy based on the relationship between sub-problems for MOEA/D. A probability vector is maintained based on the relationship between sub-problems, which is used to guide the selection of sub-problems for optimization. In the optimization process, we explore the role of priority optimization of boundary sub-problems and use it to assist in the update of the probability vector in the early optimization phase. A steady-state algorithm is designed and tested experimentally.
The results suggest that the designed algorithms have some advantages over existing state-of-the-art algorithms.ARES: adaptive receding-horizon synthesis of optimal plans.https://www.zbmath.org/1453.901812021-02-27T13:50:00+00:00"Lukina, Anna"https://www.zbmath.org/authors/?q=ai:lukina.anna"Esterle, Lukas"https://www.zbmath.org/authors/?q=ai:esterle.lukas"Hirsch, Christian"https://www.zbmath.org/authors/?q=ai:hirsch.christian"Bartocci, Ezio"https://www.zbmath.org/authors/?q=ai:bartocci.ezio"Yang, Junxing"https://www.zbmath.org/authors/?q=ai:yang.junxing"Tiwari, Ashish"https://www.zbmath.org/authors/?q=ai:tiwari.ashish-kumar"Smolka, Scott A."https://www.zbmath.org/authors/?q=ai:smolka.scott-a"Grosu, Radu"https://www.zbmath.org/authors/?q=ai:grosu.raduSummary: We introduce ARES, an efficient approximation algorithm for generating optimal plans (action sequences) that take an initial state of a Markov Decision Process (MDP) to a state whose cost is below a specified (convergence) threshold. ARES uses Particle Swarm Optimization, with adaptive sizing for both the receding horizon and the particle swarm. Inspired by Importance Splitting, the length of the horizon and the number of particles are chosen such that at least one particle reaches a next-level state, that is, a state where the cost decreases by a required delta from the previous-level state. The level relation on states and the plans constructed by ARES implicitly define a Lyapunov function and an optimal policy, respectively, both of which could be explicitly generated by applying ARES to all states of the MDP, up to some topological equivalence relation. We also assess the effectiveness of ARES by statistically evaluating its rate of success in generating optimal plans. The ARES algorithm resulted from our desire to clarify if flying in V-formation is a flocking policy that optimizes energy conservation, clear view, and velocity alignment. 
That is, we were interested to see if one could find optimal plans that bring a flock from an arbitrary initial state to a state exhibiting a single connected V-formation. For flocks with 7 birds, ARES is able to generate a plan that leads to a V-formation in 95\% of the 8,000 random initial configurations within 63 s, on average. ARES can also be easily customized into a model-predictive controller (MPC) with an adaptive receding horizon and statistical guarantees of convergence. To the best of our knowledge, our adaptive-sizing approach is the first to provide convergence guarantees in receding-horizon techniques.
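The adaptive-sizing idea, stripped to its core, can be sketched as follows; the one-dimensional action space, the level rule, and all names are illustrative assumptions rather than the ARES implementation:

```python
import random

def adaptive_receding_horizon(step, cost, state, threshold,
                              h_max=5, p_max=64, seed=0):
    """Receding-horizon planning skeleton in the spirit of ARES: sample
    random action sequences (particles), and enlarge the horizon and the
    particle count until some particle reaches the next cost level."""
    rng = random.Random(seed)
    plan = []
    while cost(state) > threshold:
        level = cost(state)
        horizon, particles = 1, 8
        best_seq, best_state = None, None
        while best_seq is None:
            for _ in range(particles):
                s, seq = state, []
                for _ in range(horizon):
                    a = rng.uniform(-1.0, 1.0)   # illustrative 1-D action
                    s = step(s, a)
                    seq.append(a)
                # Accept a particle once it reaches the next level,
                # i.e. the cost drops by a required fraction (delta).
                if cost(s) < 0.9 * level:
                    best_seq, best_state = seq, s
                    break
            else:
                # No particle reached the next level: adapt sizes upward.
                horizon = min(horizon + 1, h_max)
                particles = min(particles * 2, p_max)
        state = best_state
        plan.extend(best_seq)
    return plan, state
```

The sequence of accepted levels plays the role of the implicit Lyapunov function mentioned in the abstract: each committed sub-plan strictly decreases the cost.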
For the entire collection see [Zbl 1360.68016].Multi-objective evolutionary algorithm with prediction in the objective space.https://www.zbmath.org/1453.901422021-02-27T13:50:00+00:00"Guerrero-Peña, Elaine"https://www.zbmath.org/authors/?q=ai:guerrero-pena.elaine"Araújo, Aluízio Fausto Ribeiro"https://www.zbmath.org/authors/?q=ai:araujo.aluizio-fausto-ribeiroSummary: Multi-objective optimization problem resolution using Evolutionary Algorithms (EAs) has not yet been completely addressed. Issues such as the population diversity loss and the EA sensitivity to the Pareto front shape affect the algorithm performance. Various EAs include knowledge acquisition for the evolutionary process to deal with such problems. Several issues are crucial for the addition of knowledge using a probabilistic model; among these, we mention the time at which the algorithm should update the model, and which information is suitable for constructing the model. To handle these issues, we propose the Non-dominated Sorting Differential Evolution improvement with Prediction in the Objective Space (OSP-NSDE). When several premises based on the Approximated Hypervolume metric are achieved, the OSP-NSDE triggers the Objective Space Prediction (OSP) strategy. The OSP identifies trends in the movements of non-dominated individuals in the objective space, and then rapidly determines promising regions in the search space and generates a new population considering such regions. Regular variation operators are used to produce the offspring whenever the OSP condition is not satisfied. 
The effectiveness of OSP-NSDE was verified on 31 well-known functions and three real-world problems, and compared with EA-based algorithms and with algorithms based on collective intelligence.Optimizing regression models for data streams with missing values.https://www.zbmath.org/1453.682212021-02-27T13:50:00+00:00"Žliobaitė, Indrė"https://www.zbmath.org/authors/?q=ai:zliobaite.indre"Hollmén, Jaakko"https://www.zbmath.org/authors/?q=ai:hollmen.jaakkoSummary: Automated data acquisition systems, such as wireless sensor networks, surveillance systems, or any system that records data in operating logs, are becoming increasingly common, and provide opportunities for making decisions on data in real or nearly real time. In these systems, data is generated continuously, resulting in a stream of data, and predictive models need to be built and updated online with the incoming data. In addition, the predictive models need to be able to output predictions continuously, and without delays. Automated data acquisition systems are prone to occasional failures. As a result, missing values may often occur. Nevertheless, predictions need to be made continuously. Hence, predictive models need to have mechanisms for dealing with missing data in such a way that the loss in accuracy due to occasionally missing values is minimal. In this paper, we theoretically analyze the effects of missing values on the accuracy of linear predictive models. We derive the optimal least squares solution that minimizes the expected mean squared error given an expected rate of missing values. Based on this theoretically optimal solution we propose a recursive algorithm for producing and updating linear regression online, without accessing historical data.
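As a hedged illustration of online linear regression under occasional missing inputs, the sketch below combines a standard recursive least-squares update with mean imputation from running feature means; it is a simplified stand-in, not the paper's expected-MSE-optimal solution, and all names are assumptions:

```python
import numpy as np

class OnlineRLS:
    """Recursive least-squares regression that tolerates missing inputs
    (NaNs) by mean-imputing them from running feature means."""
    def __init__(self, dim, lam=0.99):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) * 1e3    # inverse-covariance estimate
        self.mean = np.zeros(dim)
        self.count = 0
        self.lam = lam                # forgetting factor for streams

    def _impute(self, x):
        x = np.array(x, dtype=float)
        miss = np.isnan(x)
        x[miss] = self.mean[miss]     # replace NaNs with running means
        return x

    def update(self, x, y):
        x = self._impute(x)
        self.count += 1
        self.mean += (x - self.mean) / self.count
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)  # RLS gain vector
        self.w += k * (y - self.w @ x)
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def predict(self, x):
        return self._impute(x) @ self.w
```

The update touches no historical data, matching the streaming constraint described above; only the weight vector, running means, and the inverse-covariance estimate are kept.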
Our experimental evaluation on eight benchmark datasets and a case study in environmental monitoring with streaming data validate the theoretical results and confirm the effectiveness of the proposed strategy.A decomposition based evolutionary algorithm with direction vector adaption and selection enhancement.https://www.zbmath.org/1453.901602021-02-27T13:50:00+00:00"Zhou, Jiajun"https://www.zbmath.org/authors/?q=ai:zhou.jiajun"Yao, Xifan"https://www.zbmath.org/authors/?q=ai:yao.xifan"Chan, Felix T. S."https://www.zbmath.org/authors/?q=ai:chan.felix-t-s"Gao, Liang"https://www.zbmath.org/authors/?q=ai:gao.liang"Jing, Xuan"https://www.zbmath.org/authors/?q=ai:jing.xuan"Li, Xinyu"https://www.zbmath.org/authors/?q=ai:li.xinyu"Lin, Yingzi"https://www.zbmath.org/authors/?q=ai:lin.yingzi"Li, Yun"https://www.zbmath.org/authors/?q=ai:li.yunSummary: To further improve the search ability of the decomposition based many/multi-objective evolutionary algorithm (MOEA/D) in tackling many-objective optimization problems (MaOPs) possessing complex characteristics (e.g., disconnected, degenerate, inverted, extremely convex or differently-scaled), we suggest an adaptive MOEA/D with better versatility, where the weight vector adaption and selection mechanism are improved. Firstly, a new niche-guided scheme considering both the vector angle and the Euclidean distance is proposed to leverage the search direction adaption upon different evolution phases, which is expected to be more robust for handling different types of irregular Pareto fronts (PFs). Secondly, in mating selection, a coordinated selection scheme aided by a multi-criterion decision procedure is utilized to enhance the effectiveness of recombination. Finally, in environmental selection, a steady state replacement strategy considering both the ensemble ranking of favorite subproblems with respect to solutions and improvement region restriction of subproblems is employed to alleviate misleading selection.
Comparison experiments on benchmark MaOPs with diverse characteristics have been performed and the empirical results demonstrate the superiority of our proposal. The effects of direction vector adaption mechanism and other pertinent enhancements are also investigated.Pointwise directional increasingness and geometric interpretation of directionally monotone functions.https://www.zbmath.org/1453.681852021-02-27T13:50:00+00:00"Sesma-Sara, Mikel"https://www.zbmath.org/authors/?q=ai:sesma-sara.mikel"De Miguel, Laura"https://www.zbmath.org/authors/?q=ai:de-miguel.laura"Roldán López de Hierro, Antonio Francisco"https://www.zbmath.org/authors/?q=ai:roldan-lopez-de-hierro.antonio-francisco"Lafuente, Julio"https://www.zbmath.org/authors/?q=ai:lafuente.julio-p"Mesiar, Radko"https://www.zbmath.org/authors/?q=ai:mesiar.radko"Bustince, Humberto"https://www.zbmath.org/authors/?q=ai:bustince.humbertoSummary: The relaxation of monotonicity requirements is a trend in the theory of aggregation functions. In the recent literature, we can find several relaxed forms of monotonicity, such as weak, directional, cone, ordered directional and strengthened directional monotonicity. All these forms of monotonicity are global properties in the sense that they are imposed for all the points in the domain of a function. In this work, we introduce a local notion of monotonicity called pointwise directional monotonicity, or directional monotonicity at a point. 
Based on this concept, we characterize all the previously defined notions of monotonicity and, in the final part of the paper, we present some geometric aspects of the global weaker forms of monotonicity, stressing their relations and singularities.Weight selection strategies for ordered weighted average based fuzzy rough sets.https://www.zbmath.org/1453.681872021-02-27T13:50:00+00:00"Vluymans, Sarah"https://www.zbmath.org/authors/?q=ai:vluymans.sarah"Mac Parthaláin, Neil"https://www.zbmath.org/authors/?q=ai:parthalain.neil-mac"Cornelis, Chris"https://www.zbmath.org/authors/?q=ai:cornelis.chris"Saeys, Yvan"https://www.zbmath.org/authors/?q=ai:saeys.yvanSummary: Fuzzy rough set theory models both vagueness and indiscernibility in data, which makes it a very useful tool for application to various machine learning tasks. In this paper, we focus on one of its robust generalisations, namely ordered weighted average based fuzzy rough sets. This model uses a weighted approach in the definition of the fuzzy rough operators. Although its efficacy and competitiveness with state-of-the-art machine learning approaches have been well established in several studies, its main drawback is the difficulty in choosing an appropriate weighting scheme. Several options exist and an adequate choice can greatly enhance the suitability of the ordered weighted average based fuzzy rough operators. In this work, we develop a clear strategy for weighting scheme selection based upon the underlying characteristics of the data. The advantages of the approach are presented in a detailed experimental study. Rather than proposing a classifier, our aim is to present a strategy to select a suitable weighting scheme for ordered weighted average based fuzzy rough sets in general.
Our weighting scheme selection process allows users to take full advantage of the versatility offered by this model and to obtain performance improvements over traditional fuzzy rough set approaches.Min-max attribute-object bireducts: on unifying models of reducts in rough set theory.https://www.zbmath.org/1453.681832021-02-27T13:50:00+00:00"Ma, Xi-Ao"https://www.zbmath.org/authors/?q=ai:ma.xiao"Yao, Yiyu"https://www.zbmath.org/authors/?q=ai:yao.yiyuSummary: A decision table describes a finite set of objects OB by using a finite set of condition attributes \(C\) and a finite set of decision attributes \(D\). Pawlak defines attribute reducts by considering the entire decision table. As a generalization, we introduce the notion of min-max attribute-object bireducts of a sub-table restricted by a pair \((B, X)\) of a subset of condition attributes \(B\) and a subset of objects \(X\). A pair \((R, Z)\) is a min-max attribute-object bireduct of \((B, X)\) if and only if \(R\) is a minimal subset of \(B\) such that \(R\) and \(B\) make the same correct decisions for objects in \(X\) and \(Z\) is a maximal subset of \(X\) for which \(B\) can make the correct decisions. We propose the notion of the decidability of objects and introduce the decidable region of a set of objects as a generalization of the positive region of the set. We define and interpret a min-max attribute-object bireduct based on the decidable region. Min-max attribute-object bireducts offer a general model of which existing models of attribute reducts are special cases.
The results lead to a unified framework for studying four types of attribute reducts.On solutions and representations of spiking neural P systems with rules on synapses.https://www.zbmath.org/1453.680772021-02-27T13:50:00+00:00"Cabarle, Francis George C."https://www.zbmath.org/authors/?q=ai:cabarle.francis-george-c"de la Cruz, Ren Tristan A."https://www.zbmath.org/authors/?q=ai:de-la-cruz.ren-tristan-a"Cailipan, Dionne Peter P."https://www.zbmath.org/authors/?q=ai:cailipan.dionne-peter-p"Zhang, Defu"https://www.zbmath.org/authors/?q=ai:zhang.defu"Liu, Xiangrong"https://www.zbmath.org/authors/?q=ai:liu.xiangrong"Zeng, Xiangxiang"https://www.zbmath.org/authors/?q=ai:zeng.xiangxiangSummary: Spiking neural P systems, or SN P systems, are parallel and nondeterministic computing models inspired by the spike processing of neurons. A variant of SN P systems known as SN P systems with rules on synapses, or RSSN P systems, makes use of the neuroscience idea that synapses or links between neurons perform spike processing instead of neurons. Spike processing in synapses instead of in neurons can allow RSSN P systems to have a smaller complexity, due to their richer semantics, as compared to SN P systems. In this work we are the first to provide the following: definitions of complexity classes of problems solved by RSSN P systems, depending on whether the problem has a uniform or nonuniform type of solution; both types of solutions to the \textbf{NP}-complete problem \texttt{Subset} \texttt{sum}; and a matrix representation and simulation algorithm for RSSN P systems. Such a representation and algorithm can aid in the practical use of RSSN P systems. We also provide small computer simulations based on our representation and algorithm. Our simulations show that the nonuniform and uniform solutions to \texttt{Subset} \texttt{sum} are better suited to the sequential CPU and the parallel GPU computer, respectively.
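For reference, the underlying decision problem solved by these systems is the classical Subset sum, which a conventional program can decide by dynamic programming; this sketch illustrates the problem only, not the SN P construction:

```python
def subset_sum(values, target):
    """Classical dynamic-programming decider for Subset sum: is there a
    subset of `values` (non-negative integers) summing exactly to `target`?"""
    reachable = {0}  # sums achievable with the values seen so far
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable
```

The set of reachable sums grows by at most `target + 1` entries, giving the familiar pseudo-polynomial O(n * target) bound.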
Lastly, we remark on several directions for investigations of RSSN P systems.Moderate deviation and restricted equivalence functions for measuring similarity between data.https://www.zbmath.org/1453.681802021-02-27T13:50:00+00:00"Altalhi, A. H."https://www.zbmath.org/authors/?q=ai:altalhi.abdulrahman-helal"Forcén, J. I."https://www.zbmath.org/authors/?q=ai:forcen.j-i"Pagola, M."https://www.zbmath.org/authors/?q=ai:pagola.miguel"Barrenechea, E."https://www.zbmath.org/authors/?q=ai:barrenechea.edurne"Bustince, H."https://www.zbmath.org/authors/?q=ai:bustince.humberto"Takáč, Zdenko"https://www.zbmath.org/authors/?q=ai:takac.zdenkoSummary: In this work we study the relation between moderate deviation functions, restricted dissimilarity functions and restricted equivalence functions. We use moderate deviation functions in order to measure the similarity or dissimilarity between a given set of data. We show an application of moderate deviation functions, used in the same way as penalty functions, to make a final decision from a score matrix in a classification problem.U-net generative adversarial network for subsurface facies modeling.https://www.zbmath.org/1453.860472021-02-27T13:50:00+00:00"Zhang, Chengkai"https://www.zbmath.org/authors/?q=ai:zhang.chengkai"Song, Xianzhi"https://www.zbmath.org/authors/?q=ai:song.xianzhi"Azevedo, Leonardo"https://www.zbmath.org/authors/?q=ai:azevedo.leonardoSummary: Subsurface models are central pieces of information in different earth-related disciplines such as groundwater management and hydrocarbon reservoir characterization. These models are normally obtained using geostatistical simulation methods. Recently, methods based on deep learning algorithms have been applied as subsurface model generators. However, there are still challenges in how to include conditioning data and ensure model variability within a set of realizations.
We illustrate the potential of Generative Adversarial Networks (GANs) to create unconditional and conditional facies models. Based on a synthetic facies dataset, we first train a Deep Convolution GAN (DCGAN) to produce unconditional facies models. Then, we show how image-to-image translation based on a U-Net GAN framework, including noise-layers, content loss function and diversity loss function, is used to model conditioning geological facies. Results show that GANs are powerful models to capture complex geological facies patterns and to generate facies realizations indistinguishable from the ones comprising the training dataset. The U-Net GAN framework performs well in providing variable models while honoring conditioning data in several scenarios. The results shown herein are expected to spark a new generation of methods for subsurface geological facies with fragmentary measurements.Using machine learning for model benchmarking and forecasting of depletion-induced seismicity in the Groningen gas field.https://www.zbmath.org/1453.860292021-02-27T13:50:00+00:00"Limbeck, Jan"https://www.zbmath.org/authors/?q=ai:limbeck.jan"Bisdom, Kevin"https://www.zbmath.org/authors/?q=ai:bisdom.kevin"Lanz, Fabian"https://www.zbmath.org/authors/?q=ai:lanz.fabian"Park, Timothy"https://www.zbmath.org/authors/?q=ai:park.timothy-a"Barbaro, Eduardo"https://www.zbmath.org/authors/?q=ai:barbaro.eduardo"Bourne, Stephen"https://www.zbmath.org/authors/?q=ai:bourne.stephen"Kiraly, Franz"https://www.zbmath.org/authors/?q=ai:kiraly.franz-j"Bierman, Stijn"https://www.zbmath.org/authors/?q=ai:bierman.stijn-m"Harris, Chris"https://www.zbmath.org/authors/?q=ai:harris.chris-j"Nevenzeel, Keimpe"https://www.zbmath.org/authors/?q=ai:nevenzeel.keimpe"den Bezemer, Taco"https://www.zbmath.org/authors/?q=ai:den-bezemer.taco"van Elk, Jan"https://www.zbmath.org/authors/?q=ai:van-elk.janSummary: The Groningen gas field in the Netherlands is experiencing induced seismicity as a result of ongoing depletion. 
The physical mechanisms that control seismicity have been studied through rock mechanical experiments and combined physical-statistical models to support development of a framework to forecast induced-seismicity risks. To investigate whether machine learning techniques such as Random Forests and Support Vector Machines bring new insights into forecasts of induced seismicity rates in space and time, a pipeline is designed that extends time-series analysis methods to a spatiotemporal framework with a factorial setup, which allows probing a large parameter space of plausible modelling assumptions, followed by a statistical meta-analysis to account for the intrinsic uncertainties in subsurface data and to ensure statistical significance and robustness of results. The pipeline includes model validation using e.g. likelihood ratio tests against average depletion thickness and strain thickness baselines to establish whether the models have statistically significant forecasting power. The methodology is applied to forecast seismicity for two distinctly different gas production scenarios. Results show that seismicity forecasts generated using Support Vector Machines significantly outperform the aforementioned baselines. Forecasts from the method hint at decreasing seismicity rates within the next 5 years in a conservative production scenario, and no such decrease in a higher depletion scenario, although due to the small effective sample size no statistically solid statement of this kind can be made. The presented approach can be used to make forecasts beyond the investigated 5-year period, although this requires the addition of limited physics-based constraints to avoid unphysical forecasts.Prioritising vulnerabilities using ANP and evaluating their optimal discovery and patch release time.https://www.zbmath.org/1453.680572021-02-27T13:50:00+00:00"Kansal, Yogita"https://www.zbmath.org/authors/?q=ai:kansal.yogita"Kapur, P.
K."https://www.zbmath.org/authors/?q=ai:kapur.p-k"Kumar, Uday"https://www.zbmath.org/authors/?q=ai:kumar.uday"Kumar, Deepak"https://www.zbmath.org/authors/?q=ai:kumar.deepakSummary: A method for filtering and identifying a vulnerability class that has a high probability of occurrence is needed by organisations to patch their software in a timely manner. In this paper, our first step is to filter the most frequently observed vulnerability type/class through a multi-criteria decision making process that involves dependency among various criteria and feedback from various alternatives, known as the analytic network process. We also formulate a cost model to provide a solution for developers facing high revenue debt because of the occurrence of highly exploited vulnerabilities belonging to the filtered group. The main aim of formulating the cost model is to evaluate the optimal discovery and patch release times such that the total developer's cost is minimised subject to risk constraints. To illustrate the proposed approach, reported vulnerabilities of Google Chrome with high exploitability have been examined at the source level.
We investigate the case where \(d\) is an independent distribution (ID) and the probability may depend on the leaf. It is known that in this general case, if the height is greater than or equal to 3, the Tarsi-type result does not hold. It is also known that for a complete binary tree of height 2, the Tarsi-type result certainly holds. In this paper, we ask whether the Tarsi-type result holds for an AND-OR tree of height 2. Here, a child node of the root is either an OR-gate or a leaf: the number of child nodes of an internal node is arbitrary, and depends on the internal node. We give an affirmative answer. Our strategy of proof is to reduce the problem to the case of directional algorithms. We perform induction on the number of leaves, and modify Tarsi's method to suit height-2 trees. We discuss why our proof does not apply to height-3 trees. This research sheds fresh light on the difference in algorithmic nature between shallow trees and deep trees.Modelling and analysis of TCP congestion control mechanisms using stochastic reward nets.https://www.zbmath.org/1453.680232021-02-27T13:50:00+00:00"Younes, Osama S."https://www.zbmath.org/authors/?q=ai:younes.osama-sSummary: Modelling of the congestion control mechanisms of the transmission control protocol (TCP) helps to obtain parametric results that help to better understand TCP behaviour under different operating conditions. Several analytical models have been proposed to analyse the behaviour of TCP congestion control mechanisms. However, most of these models were designed for a few TCP sessions with non-persistent connections. A few analytical studies were presented for modelling persistent TCP connections. Nevertheless, these studies have many limiting assumptions and are not scalable. In this work, a stochastic reward nets model is introduced for TCP Reno with persistent TCP connections, which share two bottleneck links in a wide area network.
A microscopic approach was used to construct the proposed model, which captures most features of the congestion control mechanisms used by TCP Reno, and the interactions between different TCP sessions. The proposed model relaxes several limiting assumptions adopted by other related models in the literature. To validate the proposed model, analytical results are extensively compared with simulation results.Efficient and data-driven prediction of water breakthrough in subsurface systems using deep long short-term memory machine learning.https://www.zbmath.org/1453.860032021-02-27T13:50:00+00:00"Bai, Tao"https://www.zbmath.org/authors/?q=ai:bai.tao"Tahmasebi, Pejman"https://www.zbmath.org/authors/?q=ai:tahmasebi.pejmanSummary: Water coning is one of the common issues in subsurface systems, in which water flows into the production well through perforated zones. This phenomenon can cause severe problems in wellbore and surface facilities. Thus, accurate prediction of water breakthrough can help to adapt the production mode and avoid such issues. Conducting flow simulations, as the conventional approach, can be very time-demanding if one deals with large subsurface systems. Furthermore, several types of data are often collected during the life of a subsurface system, each of which can help to predict the breakthrough and water coning. As such, it is very important to produce similar results using the time-sequence data gathered from various geo-sensing tools. In this paper, a deep long short-term memory (LSTM) model is developed to predict the water cut and water breakthrough time for multiple production wells in a water flooding case. The dataset is generated by the Egg model with a multi-input-multi-output system. We found that the proposed model can capture the general trend of variation of the water cut time-sequence data for a complex subsurface system.
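For readers unfamiliar with this model class, a single LSTM step applies four gates to the current input and the previous hidden and cell states; a minimal NumPy sketch of one cell (random toy weights, not the paper's trained model) is:

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """One LSTM-cell step: W, U, b stack the input, forget, candidate
    and output blocks, each of hidden size n."""
    n = h.size
    z = W @ x + U @ h + b                # all four gate pre-activations
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sig(z[:n])                       # input gate
    f = sig(z[n:2 * n])                  # forget gate
    g = np.tanh(z[2 * n:3 * n])          # candidate cell update
    o = sig(z[3 * n:])                   # output gate
    c_new = f * c + i * g                # gated memory update
    h_new = o * np.tanh(c_new)           # exposed hidden state
    return h_new, c_new

def lstm_forward(xs, n):
    """Run a toy sequence through one LSTM layer with fixed random weights."""
    rng = np.random.default_rng(0)
    d = xs.shape[1]
    W = rng.normal(scale=0.1, size=(4 * n, d))
    U = rng.normal(scale=0.1, size=(4 * n, n))
    b = np.zeros(4 * n)
    h, c = np.zeros(n), np.zeros(n)
    for x in xs:
        h, c = lstm_cell(x, h, c, W, U, b)
    return h
```

The forget gate `f` is what lets the cell state carry long-range trends such as a slowly rising water cut across many time steps.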
To evaluate the performance of our data-driven method, the results are compared with vanilla recurrent neural network (RNN), deep gated recurrent unit (GRU), and artificial neural network (ANN). The conducted comparison indicates that the proposed deep LSTM model outperforms the other three approaches when the results are compared with the numerical data.A novel learning algorithm for Büchi automata based on family of DFAs and classification trees.https://www.zbmath.org/1453.680982021-02-27T13:50:00+00:00"Li, Yong"https://www.zbmath.org/authors/?q=ai:li.yong.9|li.yong.6|li.yong.7|li.yong.4|li.yong.8|li.yong.5|li.yong.3"Chen, Yu-Fang"https://www.zbmath.org/authors/?q=ai:chen.yu-fang"Zhang, Lijun"https://www.zbmath.org/authors/?q=ai:zhang.lijun"Liu, Depeng"https://www.zbmath.org/authors/?q=ai:liu.depengSummary: In this paper, we propose a novel algorithm to learn a Büchi automaton from a teacher who knows an \(\omega \)-regular language. The algorithm is based on learning a formalism named family of DFAs (FDFAs) recently proposed by
\textit{D. Angluin} and \textit{D. Fisman} [Lect. Notes Comput. Sci. 8776, 125--139 (2014; Zbl 1353.68149); Theor. Comput. Sci. 650, 57--72 (2016; Zbl 1362.68118)].
The main catch is that we use a classification tree structure instead of the standard observation table structure. The worst-case storage space required by our algorithm is quadratically better than that of the table-based algorithm proposed in [loc. cit.]. We implement the first publicly available library ROLL (Regular Omega Language Learning), which consists of all \(\omega \)-regular learning algorithms available in the literature and the new algorithms proposed in this paper. Experimental results show that our tree-based algorithms have the best performance regarding the number of solved learning tasks.
For the entire collection see [Zbl 1360.68015].Adaptive activation functions accelerate convergence in deep and physics-informed neural networks.https://www.zbmath.org/1453.681652021-02-27T13:50:00+00:00"Jagtap, Ameya D."https://www.zbmath.org/authors/?q=ai:jagtap.ameya-d"Kawaguchi, Kenji"https://www.zbmath.org/authors/?q=ai:kawaguchi.kenji"Karniadakis, George Em"https://www.zbmath.org/authors/?q=ai:karniadakis.george-emSummary: We employ adaptive activation functions for regression in deep and physics-informed neural networks (PINNs) to approximate smooth and discontinuous functions as well as solutions of linear and nonlinear partial differential equations. In particular, we solve the nonlinear Klein-Gordon equation, which has smooth solutions, the nonlinear Burgers equation, which can admit high gradient solutions, and the Helmholtz equation. We introduce a scalable hyper-parameter in the activation function, which can be optimized to achieve best performance of the network as it changes dynamically the topology of the loss function involved in the optimization process. The adaptive activation function has better learning capabilities than the traditional one (fixed activation) as it improves greatly the convergence rate, especially at early training, as well as the solution accuracy. To better understand the learning process, we plot the neural network solution in the frequency domain to examine how the network captures successively different frequency bands present in the solution. We consider both forward problems, where the approximate solutions are obtained, as well as inverse problems, where parameters involved in the governing equation are identified. Our simulation results show that the proposed method is a very simple and effective approach to increase the efficiency, robustness and accuracy of the neural network approximation of nonlinear functions as well as solutions of partial differential equations, especially for forward problems. 
We theoretically prove that in the proposed method, gradient descent algorithms are not attracted to suboptimal critical points or local minima. Furthermore, the proposed adaptive activation functions are shown to accelerate the minimization process of the loss values in standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion datasets with and without data augmentation.Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks.https://www.zbmath.org/1453.650212021-02-27T13:50:00+00:00"Karumuri, Sharmila"https://www.zbmath.org/authors/?q=ai:karumuri.sharmila"Tripathy, Rohit"https://www.zbmath.org/authors/?q=ai:tripathy.rohit-k"Bilionis, Ilias"https://www.zbmath.org/authors/?q=ai:bilionis.ilias"Panchal, Jitesh"https://www.zbmath.org/authors/?q=ai:panchal.jitesh-hSummary: Stochastic partial differential equations (SPDEs) are ubiquitous in engineering and computational sciences. The stochasticity arises as a consequence of uncertainty in input parameters, constitutive relations, initial/boundary conditions, etc. Because of these functional uncertainties, the stochastic parameter space is often high-dimensional, requiring hundreds, or even thousands, of parameters to describe it. This poses an insurmountable challenge to response surface modeling since the number of forward model evaluations needed to construct an accurate surrogate grows exponentially with the dimension of the uncertain parameter space; a phenomenon referred to as the \textit{curse of dimensionality}. State-of-the-art methods for high-dimensional uncertainty propagation seek to alleviate the curse of dimensionality by performing dimensionality reduction in the uncertain parameter space. However, one still needs to perform forward model evaluations that potentially carry a very high computational burden. 
We propose a novel methodology for high-dimensional uncertainty propagation of elliptic SPDEs which lifts the requirement for a deterministic forward solver. Our approach is as follows. We parameterize the solution of the elliptic SPDE using a deep residual network (ResNet). In a departure from traditional squared residual (SR) based loss function for training the ResNet, we introduce a physics-informed loss function derived from variational principles. Specifically, our loss function is the expectation of the energy functional of the PDE over the stochastic variables. We demonstrate our solver-free approach through various examples where the elliptic SPDE is subjected to different types of high-dimensional input uncertainties. Also, we solve high-dimensional uncertainty propagation and inverse problems.Solving electrical impedance tomography with deep learning.https://www.zbmath.org/1453.650412021-02-27T13:50:00+00:00"Fan, Yuwei"https://www.zbmath.org/authors/?q=ai:fan.yuwei"Ying, Lexing"https://www.zbmath.org/authors/?q=ai:ying.lexingSummary: This paper introduces a new approach for solving electrical impedance tomography (EIT) problems using deep neural networks. The mathematical problem of EIT is to invert the electrical conductivity from the Dirichlet-to-Neumann (DtN) map. Both the forward map from the electrical conductivity to the DtN map and the inverse map are high-dimensional and nonlinear. Motivated by the linear perturbative analysis of the forward map and based on a numerically low-rank property, we propose compact neural network architectures for the forward and inverse maps for both 2D and 3D problems. Numerical results demonstrate the efficiency of the proposed neural networks.Dealing with negative conditions in automated proving: tools and challenges. 
The unexpected consequences of Rabinowitsch's trick.https://www.zbmath.org/1453.682092021-02-27T13:50:00+00:00"Ladra, Manuel"https://www.zbmath.org/authors/?q=ai:ladra-gonzalez.manuel"Páez-Guillán, Pilar"https://www.zbmath.org/authors/?q=ai:paez-guillan.pilar"Recio, Tomás"https://www.zbmath.org/authors/?q=ai:recio.tomasSummary: In the algebraic-geometry-based theory of automated proving and discovery, it is often required that the user includes, as complementary hypotheses, some intuitively obvious non-degeneracy conditions. Traditionally there are two main procedures to introduce such conditions into the hypotheses set. The aim of this paper is to present these two approaches, namely Rabinowitsch's trick and the ideal saturation computation, and to discuss in detail the close relationships and subtle differences that exist between them, highlighting the advantages and drawbacks of each one. We also present a carefully developed example which illustrates the previous discussion. Moreover, the paper will analyze the impact of each of these two methods in the formulation of statements with negative theses, yielding rather unexpected results if Rabinowitsch's trick is applied. All the calculations have been carried out using the software Singular in the FinisTerrae 2 supercomputer.The range of all regularities for polynomial ideals with a given Hilbert function.https://www.zbmath.org/1453.130782021-02-27T13:50:00+00:00"Cioffi, Francesca"https://www.zbmath.org/authors/?q=ai:cioffi.francescaLet \(A\) be a polynomial ring over a field \(K\) and let \(I\) denote any homogeneous ideal of \(A\). This paper presents a proof of the following statement suggested by Le Tuan Hoa in a private communication,
Theorem 1: Given a Hilbert function \(u\), the set \(\{reg(I) : A/I \text{ has Hilbert function } u \}\) is an interval of integers.
Since for every homogeneous ideal \(I\) there exists a strongly stable ideal \(J\) such that \(reg(I)=reg(J)\) and \(reg(J)\) is the highest degree, denoted by \(\nu(J)\), of a minimal generator of \(J\), Theorem 1 is equivalent to the following one:
Theorem 2: Given a Hilbert function \(u\), the set \(\{\nu(J) : J \text{ is strongly stable and }A/J \text{ has Hilbert function } u\}\) is an interval of integers.
The proof is based on constructive arguments presented in the paper [\textit{F. Cioffi} et al., Exp. Math. 24, No. 4, 424--437 (2015; Zbl 1333.13028)], mainly on a construction called \textit{expanded lifting}, which allows one, given a saturated homogeneous ideal \(I\) with Hilbert function \(\preceq u\), to construct a homogeneous ideal \(J\) with Hilbert function \(u\).
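For a monomial ideal \(J\), the Hilbert function of \(A/J\) can be computed directly from the definition by counting standard monomials degree by degree. The sketch below is brute-force counting over an illustrative ideal, not the expanded-lifting construction of the paper.

```python
from itertools import combinations_with_replacement

def monomials(nvars, d):
    """All exponent vectors of total degree d in nvars variables."""
    for bars in combinations_with_replacement(range(nvars), d):
        e = [0] * nvars
        for i in bars:
            e[i] += 1
        yield tuple(e)

def divides(g, m):
    return all(gi <= mi for gi, mi in zip(g, m))

def hilbert_function(gens, nvars, upto):
    """h(d) = number of degree-d monomials NOT in the monomial ideal <gens>."""
    return [sum(1 for m in monomials(nvars, d)
                if not any(divides(g, m) for g in gens))
            for d in range(upto + 1)]

# Illustrative example: J = (x^2, xy) in K[x, y]; the standard monomials are
# 1, x, y, y^2, y^3, ... so h = 1, 2, 1, 1, 1, ...
print(hilbert_function([(2, 0), (1, 1)], 2, 5))  # [1, 2, 1, 1, 1, 1]
```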
Reviewer: Gema Maria Diaz Toca (Murcia)Mixed-effects random forest for clustered data.https://www.zbmath.org/1453.625432021-02-27T13:50:00+00:00"Hajjem, Ahlem"https://www.zbmath.org/authors/?q=ai:hajjem.ahlem"Bellavance, François"https://www.zbmath.org/authors/?q=ai:bellavance.francois"Larocque, Denis"https://www.zbmath.org/authors/?q=ai:larocque.denisSummary: This paper presents an extension of the random forest (RF) method to the case of clustered data. The proposed `mixed-effects random forest' (MERF) is implemented using a standard RF algorithm within the framework of the expectation-maximization algorithm. Simulation results show that the proposed MERF method provides substantial improvements over standard RF when the random effects are non-negligible. The use of the method is illustrated by predicting the first-week box office revenues of movies.An encoding and reconstructing method with robust transmission for 3D model topological data over wireless network.https://www.zbmath.org/1453.680762021-02-27T13:50:00+00:00"Wang, Huiqin Q."https://www.zbmath.org/authors/?q=ai:wang.huiqin-q"Yang, Bailin L."https://www.zbmath.org/authors/?q=ai:yang.bailin-l"Qin, Aihong H."https://www.zbmath.org/authors/?q=ai:qin.aihong-hSummary: This paper proposes an encoding and reconstructing method with robust transmission for 3D model topological data. For the encoding of topological data, we first adopt the valence-driven method to visit the full mesh and give each vertex a sequence number, degree information and the neighbouring vertex information. Then, we adopt the improved graph colouring algorithm to colour each vertex according to the sequence number of the vertex. Thirdly, we divide the 3D model into several packets according to the vertices' colours. Finally, we use Huffman coding to compress the model.
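The Huffman compression step at the end of the encoding pipeline is classical; a minimal sketch using Python's heapq, with illustrative symbol frequencies:

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix-free code from {symbol: frequency} via the classic heap merge."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)  # tie-breaker so tuples never compare the dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16})
# More frequent symbols never receive longer codewords ...
assert len(codes["a"]) <= len(codes["d"]) <= len(codes["b"])
# ... and the code is prefix-free: no codeword is a prefix of another.
ws = list(codes.values())
assert all(not y.startswith(x) for x in ws for y in ws if x != y)
```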
For the reconstruction of topological data, when the client receives packets, we design an effective reconstruction method that combines the degree information and the neighbouring vertex information with a special traversal rule to reconstruct the model. Experimental results show that even at a high packet loss rate a good reconstruction of the model can still be obtained.The persistence landscape and some of its properties.https://www.zbmath.org/1453.628072021-02-27T13:50:00+00:00"Bubenik, Peter"https://www.zbmath.org/authors/?q=ai:bubenik.peterSummary: Persistence landscapes map persistence diagrams into a function space, which may often be taken to be a Banach space or even a Hilbert space. In the latter case, it is a feature map and there is an associated kernel. The main advantage of this summary is that it allows one to apply tools from statistics and machine learning. Furthermore, the mapping from persistence diagrams to persistence landscapes is stable and invertible. We introduce a weighted version of the persistence landscape and define a one-parameter family of Poisson-weighted persistence landscape kernels that may be useful for learning. We also demonstrate some additional properties of the persistence landscape. First, the persistence landscape may be viewed as a tropical rational function. Second, in many cases it is possible to exactly reconstruct all of the component persistence diagrams from an average persistence landscape. It follows that the persistence landscape kernel is characteristic for certain generic empirical measures. Finally, the persistence landscape distance may be arbitrarily small compared to the interleaving distance.
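The persistence landscape has an elementary pointwise description: \(\lambda_k(t)\) is the \(k\)-th largest value among the tent functions \(\min(t-b, d-t)_+\) taken over the points \((b,d)\) of the diagram. A minimal sketch, independent of any particular TDA library (the diagram values are illustrative):

```python
def landscape(diagram, k, t):
    """k-th persistence landscape function (k = 1, 2, ...) evaluated at t."""
    # One "tent" per diagram point (b, d), truncated at zero.
    tents = sorted((max(0.0, min(t - b, d - t)) for b, d in diagram), reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0

diagram = [(0.0, 2.0), (1.0, 4.0)]
assert landscape(diagram, 1, 1.0) == 1.0   # tents at t=1: min(1,1)=1 and min(0,3)=0
assert landscape(diagram, 2, 1.5) == 0.5   # tents at t=1.5: 0.5 and 0.5
```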
For the entire collection see [Zbl 1448.62008].Necessary and sufficient preconditions via eager abstraction.https://www.zbmath.org/1453.680562021-02-27T13:50:00+00:00"Seghir, Mohamed Nassim"https://www.zbmath.org/authors/?q=ai:seghir.mohamed-nassim"Schrammel, Peter"https://www.zbmath.org/authors/?q=ai:schrammel.peterSummary: The precondition for safe execution of a procedure is useful for understanding, verifying and debugging programs. We have previously presented a \textsc{cegar}-based approach for inferring necessary and sufficient preconditions based on the iterative abstraction-refinement of the set of safe and unsafe states until they become disjoint [\textit{M. N. Seghir} and \textit{D. Kroening}, Lect. Notes Comput. Sci. 7792, 451--471 (2013; Zbl 1381.68180)]. A drawback of that approach is that safe and unsafe traces are explored separately and each time they are built entirely before being checked for consistency. In this paper, we present an eager approach that explores shared prefixes between safe and unsafe traces conjointly. As a result, individual state sets, by construction, fulfil the property of separation between safe and unsafe states without requiring any refinement. Experiments using our implementation of this technique in the precondition generator P-Gen show a significant improvement compared to our previous \textsc{cegar}-based method. In some cases the running time drops from several minutes to several seconds.
For the entire collection see [Zbl 1327.68021].Multi-step quantile regression tree.https://www.zbmath.org/1453.624522021-02-27T13:50:00+00:00"Chang, Youngjae"https://www.zbmath.org/authors/?q=ai:chang.youngjaeSummary: Quantile regression (QR) proposed by \textit{R. Koenker} and \textit{G. Bassett jun.} [Econometrica 46, 33--50 (1978; Zbl 0373.62038)] is a statistical technique that estimates conditional quantiles. It has been widely studied and applied to economics. \textit{N. Meinshausen} [J. Mach. Learn. Res. 7, 983--999 (2006; Zbl 1222.68262)] proposed quantile regression forests (QRF), a non-parametric method based on random forests. QRF performs well in terms of prediction accuracy, but it struggles with noisy data sets. This motivates us to propose a multi-step QR tree method using GUIDE (Generalized, Unbiased, Interaction Detection and Estimation) developed by \textit{W.-Y. Loh} [Stat. Sin. 12, No. 2, 361--386 (2002; Zbl 0998.62042)]. Our simulation study shows that the multi-step QR tree performs better than a single tree or QRF, especially when it deals with data sets having many irrelevant variables.One-position inheritance based cuckoo search algorithm.https://www.zbmath.org/1453.902152021-02-27T13:50:00+00:00"Wang, Lijin"https://www.zbmath.org/authors/?q=ai:wang.lijin"Zhong, Yiwen"https://www.zbmath.org/authors/?q=ai:zhong.yiwenSummary: In this study, we extend the cuckoo search algorithm with a one-position inheritance mechanism; the resulting algorithm is called OPICS. After searching for new solutions via the Lévy-flight random walk, OPICS employs the one-position inheritance mechanism to ensure that these new solutions learn one-dimensional information from a random solution. Learning this extra information can help the new solutions escape from a local optimum. Moreover, the one-position inheritance mechanism helps the best solution participate in searching for a new solution.
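The objective underlying the quantile regression methods reviewed above is the Koenker-Bassett check (pinball) loss; minimising it over a constant recovers a sample quantile, as this sketch with illustrative data shows:

```python
def pinball(tau, ys, q):
    """Koenker-Bassett check loss of a candidate tau-quantile q."""
    return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in ys)

ys = [1.0, 2.0, 4.0, 8.0, 16.0]
# Evaluating the loss at the data points suffices: a minimiser of the pinball
# loss over constants is always attained at a sample tau-quantile.
best = min(ys, key=lambda q: pinball(0.5, ys, q))
assert best == 4.0  # tau = 0.5 recovers the sample median
```

Quantile regression trees and forests apply exactly this criterion within the leaves defined by the covariates, instead of globally over all responses.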
Extensive experiments, which are carried out on 28 benchmark functions with different properties, demonstrate the improvement in effectiveness and efficiency of OPICS.The irreducible characters of the Sylow \(p\)-subgroups of the Chevalley groups \(\mathrm{D}_{6}(p^f)\) and \(\mathrm{E}_{6}(p^f)\).https://www.zbmath.org/1453.200252021-02-27T13:50:00+00:00"Le, Tung"https://www.zbmath.org/authors/?q=ai:le.tung"Magaard, Kay"https://www.zbmath.org/authors/?q=ai:magaard.kay"Paolini, Alessandro"https://www.zbmath.org/authors/?q=ai:paolini.alessandroSummary: We parametrize the set of irreducible characters of the Sylow \(p\)-subgroups of the Chevalley groups \(\mathrm{D}_6(q)\) and \(\mathrm{E}_6(q)\), for an arbitrary power \(q\) of any prime \(p\). In particular, we establish that the parametrization is uniform for \(p \geq 3\) in type \(\mathrm{D}_6\) and for \(p \geq 5\) in type \(\mathrm{E}_6\), while the prime 2 in type \(\mathrm{D}_6\) and the primes 2, 3 in type \(\mathrm{E}_6\) yield character degrees of the form \(q^m / p^i\) which force a departure from the generic situations. Also, for the first time in our analysis, we see a family of irreducible characters of a classical group of degree \(q^m / p^i\) with \(i > 1\), which occurs in type \(\mathrm{D}_6\).
We also describe how this output can be used to help find an effective base and strong generating set for such groups. We have implemented the resulting algorithms in \textsf{Magma}, and we report on their performance.Uniform page migration problem in Euclidean space.https://www.zbmath.org/1453.682192021-02-27T13:50:00+00:00"Khorramian, Amanj"https://www.zbmath.org/authors/?q=ai:khorramian.amanj"Matsubayashi, Akira"https://www.zbmath.org/authors/?q=ai:matsubayashi.akiraSummary: The page migration problem in Euclidean space is revisited. In this problem, online requests occur at any location to access a single page located at a server. Every request must be served, and the server has the choice to migrate from its current location to a new location in space. Each service costs the Euclidean distance between the server and request. A migration costs the distance between the former and the new server location, multiplied by the page size. We study the problem in the uniform model, in which the page has size \(D=1\). All request locations are not known in advance; however, they are sequentially presented in an online fashion. We design a \(2.75\)-competitive online algorithm that improves the current best upper bound for the problem with the unit page size. We also provide a lower bound of \(2.732\) for our algorithm. It was already known that 2.5 is a lower bound for this problem.Online submodular maximization with free disposal.https://www.zbmath.org/1453.682172021-02-27T13:50:00+00:00"Chan, T.-H. 
Hubert"https://www.zbmath.org/authors/?q=ai:chan.t-h-hubert"Huang, Zhiyi"https://www.zbmath.org/authors/?q=ai:huang.zhiyi"Jiang, Shaofeng H.-C."https://www.zbmath.org/authors/?q=ai:jiang.shaofeng-h-c"Kang, Ning"https://www.zbmath.org/authors/?q=ai:kang.ning"Tang, Zhihao Gavin"https://www.zbmath.org/authors/?q=ai:tang.zhihao-gavinA novel particle swarm optimisation with hybrid strategies.https://www.zbmath.org/1453.901892021-02-27T13:50:00+00:00"Chen, Rongfang"https://www.zbmath.org/authors/?q=ai:chen.rongfang"Tang, Jun"https://www.zbmath.org/authors/?q=ai:tang.junSummary: Particle swarm optimisation (PSO) is an efficient optimisation technique, which has shown good search performance on many optimisation problems. However, the standard PSO easily falls into local minima because particles are attracted by their previous best particles and the global best particle. Though the attraction can accelerate the search process, it results in premature convergence. To tackle this issue, a novel PSO algorithm with hybrid strategies is proposed in this paper. The new approach called HPSO employs two strategies: a new velocity updating model and generalised opposition-based learning (GOBL). To test the performance of HPSO, 12 benchmark functions including multimodal and rotated problems are used in the experiments. Computational results show that our approach achieves promising performance.Hybrid particle swarm optimisation with adaptively coordinated local searches for multimodal optimisation.https://www.zbmath.org/1453.902192021-02-27T13:50:00+00:00"Xu, Gang"https://www.zbmath.org/authors/?q=ai:xu.gang"Liu, Hao"https://www.zbmath.org/authors/?q=ai:liu.hao.1|liu.hao.2|liu.haoSummary: Particle swarm optimisation (PSO) is a population-based stochastic search algorithm. Two common criticisms exist. First, PSO suffers premature convergence. 
Second, several existing PSO variants are designed for a specific search space; thus, an algorithm performing well on a diverse set of problems is lacking. In this paper, we propose a hybrid particle swarm optimisation with adaptively coordinated local searches, called NMRM-PSO, to remedy the above drawbacks. These local search algorithms are the Nelder-Mead algorithm and the Rosenbrock method. NMRM-PSO has two alternating phases: the exploration phase realised by PSO and the exploitation phase completed by two adaptively coordinated local searches. Experimental results show that NMRM-PSO outperforms all of the tested PSO algorithms on most multimodal functions in terms of solution quality, convergence speed and success rate.Regression-based sparse polynomial chaos for uncertainty quantification of subsurface flow models.https://www.zbmath.org/1453.762082021-02-27T13:50:00+00:00"Tarakanov, Alexander"https://www.zbmath.org/authors/?q=ai:tarakanov.alexander-o"Elsheikh, Ahmed H."https://www.zbmath.org/authors/?q=ai:elsheikh.ahmed-hSummary: Surrogate-modeling techniques, including Polynomial Chaos Expansion (PCE), are commonly used for statistical estimation (a.k.a. uncertainty quantification) of quantities of interest obtained from expensive computational models. PCE is a data-driven regression-based technique that relies on spectral polynomials as basis functions. In this technique, the outputs of a few numerical simulations are used to estimate the PCE coefficients within a regression framework combined with regularization techniques, where the regularization parameters are estimated using standard cross-validation as applied in supervised machine learning methods.
In the present work, we introduce an efficient method for estimating the PCE coefficients that combines Elastic Net regularization with a data-driven feature ranking approach. Our goal is to increase the probability of identifying the most significant PCE components by assigning each of the PCE coefficients a numerical value reflecting the magnitude of the coefficient and its stability with respect to perturbations in the input data. In our evaluations, the proposed approach has shown a high convergence rate for high-dimensional problems, where standard feature ranking might be challenging due to the curse of dimensionality.
The presented method is implemented within a standard machine learning library (scikit-learn [\textit{F. Pedregosa} et al., J. Mach. Learn. Res. 12, 2825--2830 (2011; Zbl 1280.68189)]) allowing for easy experimentation with various solvers and regularization techniques (e.g. Tikhonov, LASSO, LARS, Elastic Net) and enabling automatic cross-validation techniques using a widely used and well tested implementation. We present a set of numerical tests on standard analytical functions, a two-phase subsurface flow model and a simulation dataset for CO2 sequestration in a saline aquifer. For all test cases, the proposed approach resulted in a significant increase in PCE convergence rates.Using Oshima splines to produce accurate numerical results and high quality graphical output.https://www.zbmath.org/1453.682252021-02-27T13:50:00+00:00"Takato, Setsuo"https://www.zbmath.org/authors/?q=ai:takato.setsuo"Vallejo, José A."https://www.zbmath.org/authors/?q=ai:vallejo.jose-antonioSummary: We illustrate the use of Oshima splines in producing high-quality \LaTeX output in two cases: first, the numerical computation of derivatives and integrals, and second, the display of silhouettes and wireframe surfaces, using the macros package KeTCindy. Both cases are of particular interest for college and university teachers wanting to create handouts to be used by students, or to draw figures for a research paper. When dealing with numerical computations, KeTCindy can make a call to the CAS Maxima to check for accuracy; in the case of surface graphics, it is particularly important to be able to detect intersections of projected curves, and we show how to do it in a seamless manner using Oshima splines in KeTCindy.
A C compiler can be called in this case to speed up computations.Exploration of dual curves using a dynamic geometry and computer algebra system.https://www.zbmath.org/1453.682232021-02-27T13:50:00+00:00"Hašek, Roman"https://www.zbmath.org/authors/?q=ai:hasek.romanSummary: This submission presents a particular example of the use of the free dynamic mathematics software GeoGebra to determine the dual curve to the given curve and to inspect its properties. The example is aimed at students of mathematics teaching. The combination of dynamic geometry tools with computer algebra functions, namely the functions for the computation of Groebner bases for polynomial ideals and a function for eliminating variables from the system of polynomial equations, based also on the method of Groebner bases, allows them to apply naturally the knowledge they acquire during their study of mathematics.Low-rank models in visual analysis.https://www.zbmath.org/1453.680032021-02-27T13:50:00+00:00"Lin, Zhouchen"https://www.zbmath.org/authors/?q=ai:lin.zhouchen"Zhang, Hongyang"https://www.zbmath.org/authors/?q=ai:zhang.hongyangPublisher's description: The book presents the state-of-the-art on low-rank models and their application to visual analysis. It provides insight into the ideas behind the models and their algorithms, giving details of their formulation and deduction. The main applications included are video denoising, background modeling, image alignment and rectification, motion segmentation, image segmentation and image saliency detection. Readers will learn which Low-rank models are highly useful in practice (both linear and nonlinear models), how to solve low-rank models efficiently, and how to apply low-rank models to real problems.
Key features:
\begin {itemize}
\item Presents a self-contained, up-to-date introduction that covers underlying theory, algorithms and the state-of-the-art in current applications
\item Provides a full and clear explanation of the theory behind the models
\item Includes detailed proofs in the appendices
\end {itemize}The research of e-mail virus spread based on complex network.https://www.zbmath.org/1453.680202021-02-27T13:50:00+00:00"Li, Kangshun"https://www.zbmath.org/authors/?q=ai:li.kangshun"Tan, Yuefeng"https://www.zbmath.org/authors/?q=ai:tan.yuefeng"Zhang, Wensheng"https://www.zbmath.org/authors/?q=ai:zhang.wensheng"Wei, Wenhua"https://www.zbmath.org/authors/?q=ai:wei.wenhuaSummary: By analysing the known e-mail virus propagation model, a new e-mail virus propagation model based on a complex network is proposed. Properly taking the human factor into consideration, a new infection rate function is defined in this paper, in contrast to existing models that simplify the influence factors into a single parameter. After the model is established, experimental analyses illustrate that the critical moment appears when the virus reaches its maximum propagation speed. Eliminating the virus before this critical moment, or delaying its arrival, is proved to be very effective in controlling virus spread. Experimental results show that, compared with traditional virus propagation models, the proposed model is more reasonable.
Meanwhile, the experiments further verify the correctness of the constructed model, which provides a useful theoretical basis for the prevention of virus spread.Utilising improved cellular automata to simulate and analyse image evolution and optimisation.https://www.zbmath.org/1453.681082021-02-27T13:50:00+00:00"Dong, Wenyong"https://www.zbmath.org/authors/?q=ai:dong.wenyong"Dong, Xueshi"https://www.zbmath.org/authors/?q=ai:dong.xueshiSummary: This paper introduces cellular automata and current research in the field, combines principal component analysis, decision tree and neural network methods to improve the cellular automata model, simulates and analyses the spatial evolution of experimental images using the improved models, and then discusses possible improvements and optimisation methods, in particular from the viewpoint of the optimisation strategy.Face detection based on particle swarm optimisation-free entropy minimisation.https://www.zbmath.org/1453.902112021-02-27T13:50:00+00:00"Shen, Xianjun"https://www.zbmath.org/authors/?q=ai:shen.xianjun"Yi, Yang"https://www.zbmath.org/authors/?q=ai:yi.yang"Dong, Wenyong"https://www.zbmath.org/authors/?q=ai:dong.wenyong"Guo, Shuaiyu"https://www.zbmath.org/authors/?q=ai:guo.shuaiyu"Li, Junyan"https://www.zbmath.org/authors/?q=ai:li.junyan"Chen, Fan"https://www.zbmath.org/authors/?q=ai:chen.fanSummary: The BP neural network has some defects such as a slow convergence rate, a relatively flat error surface, and a tendency to get trapped in local minima. This paper therefore proposes a modified particle swarm optimisation algorithm based on the principle of free entropy minimisation (PSO-FEM), which is used to optimise the BP neural network for face detection.
By drawing on the concept of entropy increase in statistical mechanics, we view the particle swarm as a closed particle system moving freely in the solution space, identify the energy minimisation of the system with the minimum value to which the particle swarm optimisation algorithm converges, and identify the entropy increase of the system with the diversification process that the swarm maintains. Simulation results demonstrate that the PSO-FEM algorithm not only obtains ideal recognition results when detecting multiple face images in complex backgrounds but also has high recognition accuracy.A memetic algorithm for the max-cut problem.https://www.zbmath.org/1453.681392021-02-27T13:50:00+00:00"Lin, Geng"https://www.zbmath.org/authors/?q=ai:lin.geng"Zhu, Wenxing"https://www.zbmath.org/authors/?q=ai:zhu.wenxingSummary: Given an undirected graph \(G = (V, E)\) with a set \(V\) of vertices, and a set \(E\) of edges with weights, the max-cut problem consists of partitioning all vertices into two disjoint sets such that the sum of the weights of the edges between different sets is maximised. The max-cut problem is NP-hard. An efficient memetic algorithm is proposed in this paper for the problem. The proposed memetic algorithm uses a local search procedure and a new crossover operator based on the encoding characteristic of the max-cut problem to generate new offspring. The algorithm then uses a function that takes into account both the solution quality and the diversity of the population to control the population updating. Experiments were performed on three sets of benchmark instances of size up to 10,000 vertices.
Experimental results and comparisons demonstrate the effectiveness of the proposed algorithm in both solution quality and computational time.Enhanced social emotional optimisation algorithm with generalised opposition-based learning.https://www.zbmath.org/1453.901982021-02-27T13:50:00+00:00"Guo, Zhaolu"https://www.zbmath.org/authors/?q=ai:guo.zhaolu"Yue, Xuezhi"https://www.zbmath.org/authors/?q=ai:yue.xuezhi"Zhang, Kejun"https://www.zbmath.org/authors/?q=ai:zhang.kejun"Deng, Changshou"https://www.zbmath.org/authors/?q=ai:deng.changshou"Liu, Songhua"https://www.zbmath.org/authors/?q=ai:liu.songhuaSummary: The social emotional optimisation algorithm (SEOA) is a newly developed evolutionary algorithm which exhibits excellent performance for various engineering problems in real-world applications. However, SEOA may easily get trapped in local optima when solving complex multimodal function optimisation problems. This paper proposes a novel social emotional optimisation algorithm, called GOSEOA, which performs the generalised opposition-based learning (GOBL) strategy with a certain probability during the evolution process. The proposed algorithm uses the generalised opposition-based learning strategy to transform the current population into a generalised opposition-based population. Accordingly, the current population and the generalised opposition-based population are simultaneously considered to increase the probability of finding the global optimum.
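The GOBL transformation used by GOSEOA (and by several other algorithms in this issue) is commonly given in the literature as \(x^* = k(a+b) - x\), with \(k\) random in \([0,1]\) and \([a,b]\) the search bounds; whether this matches GOSEOA's exact variant is an assumption, and the sphere objective below is illustrative.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def gobl_step(pop, lo, hi, f, rng):
    """Generalised opposition-based learning: build the transformed population
    x* = k*(lo+hi) - x (clipped to the bounds) and keep the fitter of each
    original/opposite pair."""
    k = rng.random()
    opposites = [[min(hi, max(lo, k * (lo + hi) - v)) for v in x] for x in pop]
    return [min(x, ox, key=f) for x, ox in zip(pop, opposites)]

rng = random.Random(1)
pop = [[rng.uniform(-5, 10) for _ in range(3)] for _ in range(20)]
new_pop = gobl_step(pop, -5.0, 10.0, sphere, rng)
# Pairwise selection can only improve (or keep) every individual's fitness.
assert all(sphere(n) <= sphere(x) for n, x in zip(new_pop, pop))
```

Considering both populations at once is what "simultaneously considered" amounts to operationally: each individual is compared against its opposite and the better of the two survives.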
Experiments conducted on a comprehensive set of benchmark functions indicate that GOSEOA can obtain promising performance on the majority of the test functions.Enhanced differential evolution with generalised opposition-based learning and orientation neighbourhood mining.https://www.zbmath.org/1453.902142021-02-27T13:50:00+00:00"Wang, Jing"https://www.zbmath.org/authors/?q=ai:wang.jing.6|wang.jing.13|wang.jing|wang.jing.2|wang.jing.14|wang.jing.15|wang.jing.17|wang.jing.1|wang.jing.3|wang.jing.11|wang.jing.16|wang.jing.5Summary: This paper presents an enhanced differential evolution (DE) algorithm that employs two strategies: generalised opposition-based learning (GOBL) and orientation neighbourhood mining (ONM). The two strategies are very helpful for balancing the global and local search abilities of the algorithm. To verify the performance of our approach, seven well-known benchmark functions are utilised. The conducted experiments indicate that the enhanced algorithm performs significantly better than, or at least comparably to, several state-of-the-art DE variants.Optimized compilation of multiset rewriting with comprehensions.https://www.zbmath.org/1453.680522021-02-27T13:50:00+00:00"Lam, Edmund Soon Lee"https://www.zbmath.org/authors/?q=ai:lam.edmund-soon-lee"Cervesato, Iliano"https://www.zbmath.org/authors/?q=ai:cervesato.ilianoSummary: We extend the rule-based, multiset rewriting language CHR with multiset comprehension patterns. Multiset comprehension provides the programmer with the ability to write multiset rewriting rules that can match a variable number of entities in the state. This enables implementing algorithms that coordinate large amounts of data or require aggregate operations in a declarative way, and results in code that is more concise and readable than with pure CHR. We call this extension \({\mathrm {CHR}^\mathrm{cp}}\).
In this paper, we formalize the operational semantics of \({\mathrm {CHR}^\mathrm{cp}}\) and define a low-level optimizing compilation scheme based on join ordering for the efficient execution of programs. We provide preliminary empirical results that demonstrate the scalability and effectiveness of this approach.
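As a loose illustration of what a comprehension-style rule head buys, the following Python sketch performs one rewrite step over a multiset state. This is not \({\mathrm {CHR}^\mathrm{cp}}\) syntax, and the `('item', x)` / `('sum', total)` facts are invented for this example; the point is only that a single rule matches a variable number of facts at once and replaces them with an aggregate.

```python
from collections import Counter

def apply_sum_rule(state):
    """One comprehension-style rewrite step (illustrative only).

    Matches *all* facts of the form ('item', x) in the multiset state
    at once -- the kind of head a multiset comprehension expresses in
    one rule -- and rewrites them into a single aggregate fact
    ('sum', total).  In comprehension-free CHR, this aggregation would
    typically need an auxiliary recursive pairwise rule.
    """
    matched = Counter({f: c for f, c in state.items() if f[0] == 'item'})
    if not matched:
        return state
    total = sum(x * c for (_, x), c in matched.items())
    rest = state - matched          # remove the matched sub-multiset
    rest[('sum', total)] += 1       # add the aggregate fact
    return rest
```

For example, a state containing two copies of `('item', 1)` and one `('item', 3)` rewrites to a single `('sum', 5)` fact, with any unmatched facts left untouched.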
For the entire collection see [Zbl 1327.68021].A hybrid cooperative cuckoo search algorithm with particle swarm optimisation.https://www.zbmath.org/1453.902162021-02-27T13:50:00+00:00"Wang, Lijin"https://www.zbmath.org/authors/?q=ai:wang.lijin"Zhong, Yiwen"https://www.zbmath.org/authors/?q=ai:zhong.yiwen"Yin, Yilong"https://www.zbmath.org/authors/?q=ai:yin.yilongSummary: This paper proposes an improved hybrid cooperative algorithm, called HCCSPSO, that combines a cooperative cuckoo search algorithm with particle swarm optimisation. The cooperative co-evolutionary framework is applied to the cuckoo search algorithm to implement dimensional cooperation. The particle swarm optimisation algorithm, viewed as a cooperative component, is embedded after the cuckoo search algorithm. During each iteration, the best solution obtained by the first cooperative component is randomly embedded in the second one to avoid the pseudo-minima produced by the first, while the subcomponents of the best solution from the second cooperative component are also randomly planted in the subcomponents of the first. The results of experimental simulations demonstrate the improvement in efficiency, the effect of the cooperation strategy, and the promise of HCCSPSO.Performance analysis for clustering algorithms.https://www.zbmath.org/1453.625562021-02-27T13:50:00+00:00"Xue, Yu"https://www.zbmath.org/authors/?q=ai:xue.yu"Zhao, Binping"https://www.zbmath.org/authors/?q=ai:zhao.binping"Ma, Tinghuai"https://www.zbmath.org/authors/?q=ai:ma.tinghuaiSummary: There are many algorithms for optimal clustering. The main clustering algorithms include K-means, fuzzy c-means (FCM) and evolutionary algorithms. The main purpose of this paper is to study the performance and characteristics of these three types of algorithms.
One criterion (a clustering validity index), namely TRW, was used in the optimisation of the classification, and eight real-world datasets of increasing dimension (glass, wine, ionosphere, biodegradation, connectionist bench, hill-valley, musk and madelon) were applied. Our performance analysis concluded that K-means and FCM easily fall into local minima, while the hybrid algorithm was found to be more reliable and more efficient, especially on difficult high-dimensional tasks.Control scaling factor of cuckoo search algorithm using learning automata.https://www.zbmath.org/1453.902032021-02-27T13:50:00+00:00"Lin, Yaohua"https://www.zbmath.org/authors/?q=ai:lin.yaohua"Wang, Lijin"https://www.zbmath.org/authors/?q=ai:wang.lijin"Zhong, Yiwen"https://www.zbmath.org/authors/?q=ai:zhong.yiwen"Zhang, Cuiping"https://www.zbmath.org/authors/?q=ai:zhang.cuipingSummary: In this study, we seek an optimal scaling factor for the cuckoo search algorithm by using learning automata. In the presented method, an identical learning automaton is built for each individual, and the action set of each learning automaton consists of several constant scaling factors. Moreover, the linear reward-penalty learning algorithm is used in each learning automaton to select the optimal scaling factor for each individual.
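The classic linear reward-penalty (\(L_{RP}\)) update mentioned above can be sketched as follows; the candidate scaling factors and the reward/penalty rates here are illustrative assumptions, not the paper's settings.

```python
import random

class LinearRewardPenalty:
    """L_RP learning automaton over a fixed set of candidate actions
    (here: constant scaling factors).  The probability vector stays
    normalised after every update."""

    def __init__(self, actions, a=0.1, b=0.1):
        self.actions = list(actions)
        self.a, self.b = a, b          # reward and penalty rates
        self.p = [1.0 / len(self.actions)] * len(self.actions)

    def select(self):
        """Sample an action index according to the probability vector."""
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def update(self, i, rewarded):
        """Reinforce action i on success, penalise it on failure."""
        n = len(self.p)
        if rewarded:
            self.p = [pj + self.a * (1.0 - pj) if j == i
                      else (1.0 - self.a) * pj
                      for j, pj in enumerate(self.p)]
        else:
            self.p = [(1.0 - self.b) * pj if j == i
                      else self.b / (n - 1) + (1.0 - self.b) * pj
                      for j, pj in enumerate(self.p)]
```

In a cuckoo-search setting, each individual would own such an automaton, call `select()` to pick its scaling factor for a trial, and call `update()` depending on whether the trial improved its fitness.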
Extensive experiments on 20 benchmark functions demonstrate the improved effectiveness and efficiency obtained by controlling the scaling factor of cuckoo search with learning automata.Improved multi-strategy artificial bee colony algorithm.https://www.zbmath.org/1453.902082021-02-27T13:50:00+00:00"Lv, Li"https://www.zbmath.org/authors/?q=ai:lv.li"Wu, Lieyang"https://www.zbmath.org/authors/?q=ai:wu.lieyang"Zhao, Jia"https://www.zbmath.org/authors/?q=ai:zhao.jia.1|zhao.jia"Wang, Hui"https://www.zbmath.org/authors/?q=ai:wang.hui.6|wang.hui.1|wang.hui.2|wang.hui.5|wang.hui.4|wang.hui"Wu, Runxiu"https://www.zbmath.org/authors/?q=ai:wu.runxiu"Fan, Tanghuai"https://www.zbmath.org/authors/?q=ai:fan.tanghuai"Hu, Min"https://www.zbmath.org/authors/?q=ai:hu.min"Xie, Zhifeng"https://www.zbmath.org/authors/?q=ai:xie.zhifengSummary: The artificial bee colony (ABC) algorithm is a nature-inspired metaheuristic that imitates the foraging behaviour of bees and is widely used for solving complex multi-dimensional optimisation problems. In order to overcome the drawbacks of standard ABC, such as slow convergence and low solution accuracy, we propose an improved multi-strategy artificial bee colony algorithm (MSABC). According to the type of position information used in ABC, three basic search mechanisms are identified: searching around the individual itself, around a random neighbour, and around the global best solution. The basic search mechanisms are then improved to obtain three search strategies. In each iteration, every bee selects one of the search strategies with equal probability to produce a candidate solution, so that the strategies together strike a good balance between exploration and exploitation. Finally, experiments are conducted on eight classical functions.
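The three basic search mechanisms distinguished in this summary can be sketched as follows. This uses the standard single-dimension ABC perturbation \(v_j = \mathrm{base}_j + \phi\,(\mathrm{base}_j - \mathrm{other}_j)\) with \(\phi \in [-1, 1]\); MSABC's exact improved equations are not given in the summary and are not reproduced here.

```python
import random

def abc_candidate(population, i, gbest, strategy=None):
    """Generate one candidate solution for bee i using one of three
    ABC-style search mechanisms: around the individual itself, around
    a random neighbour, or around the global best solution."""
    x = population[i]
    k = random.choice([j for j in range(len(population)) if j != i])
    neighbour = population[k]
    if strategy is None:
        # each strategy is chosen with equal probability
        strategy = random.choice(('self', 'neighbour', 'gbest'))
    j = random.randrange(len(x))        # perturb one dimension only
    phi = random.uniform(-1.0, 1.0)
    v = list(x)
    if strategy == 'self':
        v[j] = x[j] + phi * (x[j] - neighbour[j])
    elif strategy == 'neighbour':
        v[j] = neighbour[j] + phi * (neighbour[j] - x[j])
    else:                               # search around the global best
        v[j] = gbest[j] + phi * (gbest[j] - x[j])
    return v
```

Greedy selection between `v` and `population[i]` would then follow, as in standard ABC.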
Results show that our algorithm performs significantly better than several recently proposed similar algorithms in terms of the convergence speed and solution accuracy.The automatic detection of token structures and invariants using SAT checking.https://www.zbmath.org/1453.681092021-02-27T13:50:00+00:00"Antonino, Pedro"https://www.zbmath.org/authors/?q=ai:antonino.pedro"Gibson-Robinson, Thomas"https://www.zbmath.org/authors/?q=ai:gibson-robinson.thomas"Roscoe, A. W."https://www.zbmath.org/authors/?q=ai:roscoe.andrew-williamSummary: Many distributed systems rely on token structures for their correct operation. Often, these structures make sure that a fixed number of tokens exists at all times, or perhaps that tokens cannot be completely eliminated, to prevent systems from reaching undesired states. In this paper we show how a SAT checker can be used to automatically detect token and similar invariants in distributed systems, and how these invariants can improve the precision of a deadlock-checking framework that is based on local analysis. We demonstrate by a series of practical experiments that this new framework is as efficient as similar incomplete techniques for deadlock-freedom analysis, while handling a different class of systems.
For the entire collection see [Zbl 1360.68016].The object localisation based on multiple image information for a humanoid robot.https://www.zbmath.org/1453.940062021-02-27T13:50:00+00:00"Du, Qinjun"https://www.zbmath.org/authors/?q=ai:du.qinjun"Wang, Yang"https://www.zbmath.org/authors/?q=ai:wang.yang.1|wang.yang"Xu, Shuai"https://www.zbmath.org/authors/?q=ai:xu.shuaiSummary: Stable and fast segmentation of the object image against a complex background is a key technology for a humanoid robot, as it enables identification and localisation of the target object. Using a single type of image information alone cannot guarantee accurate segmentation of the target object image. We select image feature information with a multi-sensor information fusion method to segment the target object. Depth, colour, shape and size are the ideal image information for the robot's visual perception system. Using the humanoid robot's stereo vision system and these four types of image information, a fast image segmentation method that gradually approaches the target area has been designed. With this identification and localisation method, the humanoid robot can find and locate the target object. Experiments show that, based on multiple image information features, the humanoid robot can locate the target object, and the localisation accuracy is improved.Efficient dictionary matching by Aho-Corasick automata of truncated patterns.https://www.zbmath.org/1453.680662021-02-27T13:50:00+00:00"Zhang, Meng"https://www.zbmath.org/authors/?q=ai:zhang.meng"Fan, Jiashu"https://www.zbmath.org/authors/?q=ai:fan.jiashu"Chen, Dequan"https://www.zbmath.org/authors/?q=ai:chen.dequanSummary: We present a space-efficient data structure for dictionary matching. We truncate patterns to truncated patterns whose symbols are \(\ell\)-length substrings of the pattern.
By employing the AC automaton of the truncated patterns and that of the \(\ell\)-length substrings, we simulate the AC automaton of the original pattern set. The new structure is space-economical, as we apply prefix merging to the substrings of patterns. Using this structure, dictionary matching runs in \(O(n \log k + \mathrm{tocc} \log k + \mathrm{occ})\) time, where \(n\) is the length of the text, \(k\) the number of patterns, occ the number of occurrences of patterns in the text, and tocc the number of occurrences of strings that are the longest prefix of some pattern whose length is a multiple of \(\ell\).A multi-factor prediction algorithm in big data computing environments.https://www.zbmath.org/1453.681762021-02-27T13:50:00+00:00"Tang, Hao"https://www.zbmath.org/authors/?q=ai:tang.hao"Sun, Dawei"https://www.zbmath.org/authors/?q=ai:sun.daweiSummary: In big data environments, many problems become much larger and more complex, which makes it harder to process the data and forecast results. Traditional prediction algorithms are better suited to problems with few factors, and how to solve multi-factor problems is one of the major challenges. In this paper, a more accurate and effective algorithm is proposed based on the good weight-optimisation performance of PSO and the generalisation ability of BPNN. Further, the convergence and feasibility of the combined algorithm are analysed with real data.
The simulations revealed that, as the number of influencing factors increases, the BP neural network optimised by PSO possesses a more rapid MSE convergence rate and higher accuracy of the network output values.On some generalised information measure of fuzzy directed divergence and decision making.https://www.zbmath.org/1453.910422021-02-27T13:50:00+00:00"Arora, Hari Darshan"https://www.zbmath.org/authors/?q=ai:arora.hari-darshan"Dhiman, Anjali"https://www.zbmath.org/authors/?q=ai:dhiman.anjaliSummary: The discrepancy of one probability distribution from another is known as a measure of divergence. \textit{S. Kullback} and \textit{R. A. Leibler} [Ann. Math. Stat. 22, 79--86 (1951; Zbl 0042.38403)] were the first to introduce such a measure, based on Shannon entropy [\textit{C. E. Shannon}, Bell Syst. Tech. J. 27, 379--423, 623--656 (1948; Zbl 1154.94303)], which revolutionised the study of communication theory. Fuzziness, a feature of uncertainty, can be managed with the help of fuzzy set theory, proposed by \textit{L. A. Zadeh} [Fuzzy Sets Syst. 1, 3--28 (1978; Zbl 0377.04002)]. Fuzziness and uncertainty both arise in various practical problems such as image segmentation and management decision-making. A large amount of work has been done on the characterisation of various entropies, divergence measures, fuzzy entropies, fuzzy divergences, etc. \textit{D. Bhandari} and \textit{N. R. Pal} [Inf. Sci. 67, No. 3, 209--228 (1993; Zbl 0763.94030)] introduced a fuzzy divergence of a fuzzy set A relative to another fuzzy set B. A new measure of fuzzy directed divergence is proposed.
Further, applications of fuzzy directed divergence in decision making have also been discussed.DV-hop localisation algorithm with DDICS.https://www.zbmath.org/1453.902012021-02-27T13:50:00+00:00"Li, Feixiang"https://www.zbmath.org/authors/?q=ai:li.feixiang"Cui, Zhihua"https://www.zbmath.org/authors/?q=ai:cui.zhihua"Sun, Bin"https://www.zbmath.org/authors/?q=ai:sun.binSummary: The dimension-by-dimension improvement-based cuckoo search algorithm (DDICS) is a recently proposed variant of the cuckoo search algorithm. Experimental results illustrate that DDICS is superior to several other evolutionary algorithms on unconstrained numerical benchmarks. In this paper, we employ it to improve the performance of the distance vector-hop (DV-hop) method. The simulation results show that it achieves the best performance when compared with two other DV-hop algorithms.Adaptive non-intrusive reduced order modeling for compressible flows.https://www.zbmath.org/1453.761182021-02-27T13:50:00+00:00"Yu, Jian"https://www.zbmath.org/authors/?q=ai:yu.jian"Yan, Chao"https://www.zbmath.org/authors/?q=ai:yan.chao"Jiang, Zhenhua"https://www.zbmath.org/authors/?q=ai:jiang.zhenhua"Yuan, Wu"https://www.zbmath.org/authors/?q=ai:yuan.wu"Chen, Shusheng"https://www.zbmath.org/authors/?q=ai:chen.shushengSummary: An adaptive non-intrusive reduced basis (RB) method based on Gaussian process regression (GPR) is proposed for parametrized compressible flows. Adaptivity is pursued in the offline stage. The reduced basis, obtained by proper orthogonal decomposition (POD), is constructed iteratively to achieve a specified tolerance. For GPR, active data selection is used at each iteration, with the standard deviation as the error indicator. To improve accuracy for shock-dominated flows, a properly designed simplified problem (SP) is considered as input of the regression models, in addition to using the parameters directly. Furthermore, a surrogate error model is constructed to serve as an efficient error estimator for the GPR models.
Several two- and three-dimensional cases are considered, including inviscid nozzle flow, inviscid NACA0012 airfoil flow and inviscid M6 wing flow. For all the cases, the trained models are able to make efficient predictions with reasonable accuracy in the online stage. The SP-based approach is observed to result in sampling biased towards transonic regions. The regression models are further applied in sensitivity analysis, from which the solutions of the two-dimensional cases are shown to be significantly more sensitive to the input parameters than the wing flow. This is consistent with the comparison of convergence histories between the parameter-based and the SP-based models. For cases of high sensitivity, the SP-based approach is superior and can help to significantly reduce the number of snapshots required to achieve a prescribed tolerance.Introduction to the special issue of the ECML PKDD 2020 journal track.https://www.zbmath.org/1453.680082021-02-27T13:50:00+00:00"Assent, Ira (ed.)"https://www.zbmath.org/authors/?q=ai:assent.ira"Domeniconi, Carlotta (ed.)"https://www.zbmath.org/authors/?q=ai:domeniconi.carlotta"Gionis, Aristides (ed.)"https://www.zbmath.org/authors/?q=ai:gionis.aristides"Hüllermeier, Eyke (ed.)"https://www.zbmath.org/authors/?q=ai:hullermeier.eykeFrom the text: The papers contained in this special issue have been accepted for the ECML PKDD 2020 journal track, which allows authors to combine a journal publication with a conference presentation of their work. The journal track was launched in 2013 and has accompanied the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD) since then. It solicits high-quality papers combining the timeliness and novelty of conference contributions with the maturity and sophistication of journal publications -- survey papers or extensions of previously published conference papers are normally excluded.
Authors can submit to the Machine Learning Journal or the Data Mining and Knowledge Discovery Journal. This year, the journal track offered four submission deadlines between September 2019 and May 2020. Accepted papers were presented (virtually) at the ECML PKDD 2020 conference in Ghent, Belgium, from September 14--18, 2020.Computation of regular expression derivatives.https://www.zbmath.org/1453.681002021-02-27T13:50:00+00:00"Murugesan, N."https://www.zbmath.org/authors/?q=ai:murugesan.nagaraj"Sundaram, O. V. Shanmuga"https://www.zbmath.org/authors/?q=ai:shanmuga-sundaram.o-vSummary: The conversion of regular expressions into finite state automata, and of finite state automata into regular expressions, is an important area of research in automata theory. The notion of derivatives of regular expressions was introduced to make the construction of finite state automata from regular expressions natural. Recently, a general structure has been proposed for finding the derivatives of regular expressions, to reduce the difficulties arising from earlier methods. In this paper, some properties of the derivatives of regular expressions, the quotient language, and how the derivatives are found through a general structure are discussed in detail.Kernel methods for transfer learning to avoid negative transfer.https://www.zbmath.org/1453.625522021-02-27T13:50:00+00:00"Shao, Hao"https://www.zbmath.org/authors/?q=ai:shao.haoSummary: In the big data era, with the development of the information storage and processing capability of computers, new tasks become more and more complex and carry higher requirements. On the other hand, out-of-date information from old tasks abounds. Due to the considerable cost of classifying newly emerged tasks, transfer learning techniques have been developed to extract useful knowledge from existing similar datasets, and a large number of research works have been published in recent years.
However, an open problem in transfer learning is negative transfer, which happens due to differing distributions among tasks. In this manuscript, we propose a kernel method to evaluate both task relatedness and instance similarities. The minimum description length principle (MDLP) is adopted, which has proved effective for evaluating models in transfer learning scenarios. Extensive experiments show the effectiveness of our algorithm in terms of classification accuracy on real datasets.A computational model for knowledge extraction in uncertain textual data using Karnaugh map technique.https://www.zbmath.org/1453.628032021-02-27T13:50:00+00:00"Rajput, Dharmendra Singh"https://www.zbmath.org/authors/?q=ai:rajput.dharmendra-singh"Thakur, Ramjeevan Singh"https://www.zbmath.org/authors/?q=ai:thakur.ramjeevan-singh"Thakur, Ghanshyam Singh"https://www.zbmath.org/authors/?q=ai:thakur.ghanshyam-singhSummary: Present technologies such as privacy-preserving data mining generate data which is inherently uncertain in nature. Other existing tools also collect data in an imprecise way. Mining frequent patterns from uncertain textual data is not as simple as from precise data, and standard approaches that work for precise data are not applicable to uncertain data. This paper describes the motivation behind the proposed method, based on a review of existing frequent termset mining techniques for document data.
Further, a new mining method using Karnaugh maps is proposed for finding frequent termsets in uncertain textual data; the experiments carried out require only a single database scan for mining frequent patterns, which results in low processing time.Detecting troubled-cells on two-dimensional unstructured grids using a neural network.https://www.zbmath.org/1453.653012021-02-27T13:50:00+00:00"Ray, Deep"https://www.zbmath.org/authors/?q=ai:ray.deep"Hesthaven, Jan S."https://www.zbmath.org/authors/?q=ai:hesthaven.jan-sSummary: In a recent paper [ibid. 367, 166--191 (2018; Zbl 1415.65229)], we proposed a new type of troubled-cell indicator to detect discontinuities in the numerical solutions of one-dimensional conservation laws. This was achieved by suitably training an artificial neural network on canonical local solution structures for conservation laws. The proposed indicator was independent of problem-dependent parameters, giving it an advantage over existing limiter-based indicators. In the present paper, we extend this approach to train a similar network capable of detecting troubled-cells on two-dimensional unstructured grids. The proposed network has a smaller architecture compared to its one-dimensional predecessor, making it computationally efficient. Several numerical results are presented to demonstrate the performance of the new indicator.Enforcing constraints for interpolation and extrapolation in generative adversarial networks.https://www.zbmath.org/1453.681592021-02-27T13:50:00+00:00"Stinis, Panos"https://www.zbmath.org/authors/?q=ai:stinis.panos"Hagge, Tobias"https://www.zbmath.org/authors/?q=ai:hagge.tobias-j"Tartakovsky, Alexandre M."https://www.zbmath.org/authors/?q=ai:tartakovsky.alexandre-m"Yeung, Enoch"https://www.zbmath.org/authors/?q=ai:yeung.enochSummary: Generative Adversarial Networks (GANs) are becoming popular machine learning choices for training generators.
At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN generator both for interpolation and extrapolation (prediction). For the case of dynamical systems, given a time series, we wish to train GAN generators that can be used to predict trajectories starting from a given initial condition. In this setting, the constraints can be in algebraic and/or differential form. Even though we are predominantly interested in the case of extrapolation, we will see that the tasks of interpolation and extrapolation are related. However, they need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. The incorporation of the constraints respects the primary game-theoretic setup of a GAN so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well-known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. During training, the GAN generator learns to interpolate a noisy version of the data and we enforce the constraints. This approach has connections with model reduction that we can utilize to improve the efficiency and accuracy of the training. Depending on the form of the constraints, we may enforce them also during prediction through a projection step. 
We provide examples of linear and nonlinear systems of differential equations to illustrate the various constructions.Uniform, integral and efficient proofs for the determinant identities.https://www.zbmath.org/1453.030642021-02-27T13:50:00+00:00"Tzameret, Iddo"https://www.zbmath.org/authors/?q=ai:tzameret.iddo"Cook, Stephen A."https://www.zbmath.org/authors/?q=ai:cook.stephen-aA physics-aware, probabilistic machine learning framework for coarse-graining high-dimensional systems in the small data regime.https://www.zbmath.org/1453.624142021-02-27T13:50:00+00:00"Grigo, Constantin"https://www.zbmath.org/authors/?q=ai:grigo.constantin"Koutsourelakis, Phaedon-Stelios"https://www.zbmath.org/authors/?q=ai:koutsourelakis.phaedon-steliosSummary: The automated construction of coarse-grained models represents a pivotal component in computer simulation of physical systems and is a key enabler in various analysis and design tasks related to uncertainty quantification. Pertinent methods are severely inhibited by the high-dimension of the parametric input and the limited number of training input/output pairs that can be generated when computationally demanding forward models are considered. Such cases are frequently encountered in the modeling of \textit{random} heterogeneous media where the scale of the microstructure necessitates the use of high-dimensional random vectors and very fine discretizations of the governing equations. The present paper proposes a probabilistic Machine Learning framework that is capable of operating in the presence of Small Data by exploiting aspects of the physical structure of the problem as well as contextual knowledge. As a result, it can perform comparably well under \textit{extrapolative} conditions. 
It unifies the tasks of dimensionality and model-order reduction through an encoder-decoder scheme that simultaneously identifies a sparse set of salient lower-dimensional microstructural features and calibrates an inexpensive, coarse-grained model which is predictive of the output. Information loss is accounted for and quantified in the form of probabilistic predictive estimates. The learning engine is based on Stochastic Variational Inference. We demonstrate how the variational objectives can be used not only to train the coarse-grained model, but also to suggest refinements that lead to improved predictions.Local decoding and testing of polynomials over grids.https://www.zbmath.org/1453.941582021-02-27T13:50:00+00:00"Bafna, Mitali"https://www.zbmath.org/authors/?q=ai:bafna.mitali"Srinivasan, Srikanth"https://www.zbmath.org/authors/?q=ai:srinivasan.srikanth"Sudan, Madhu"https://www.zbmath.org/authors/?q=ai:sudan.madhuThis paper deals with low degree \(n\)-variate polynomials. One property of these
polynomials underlying many of the applications is the
``DeMillo-Lipton-Schwartz-Zippel'' distance lemma which gives an upper bound for the
number of zeroes that a low-degree polynomial may have over a domain of the
form \(A_1\times \cdots\times A_n\). This turns the space of polynomials into an
error-correcting code. Many applications based on this class of codes
have motivated polynomial time decoding algorithms, ``local decoding'',
and ``local testing'' procedures for these codes. The paper initiates the study
of local decoding and testing algorithms for polynomials when \(|A_i| = 2\)
\((i=1,\ldots,n)\). It is proved that the space of degree 1 polynomials is not
locally decodable over fields of characteristic 0 or over fields of large
characteristic. On the other hand, it is proved that the class of degree \(d\)
polynomials over fields of small characteristic is locally decodable.
The space of functions from \(\{0,1\}^n\) to a field \(F\) obtained by evaluating
\(n\)-variate polynomials over \(F\) of degree at most \(d\) on \(\{0,1\}^n\) is
denoted by \(\mathcal{F}(n,d)\). Finally, it is shown that \(\mathcal{F}(n,d)\) is
tolerantly locally testable over all fields.
Reviewer: Dimitrios Poulakis (Thessaloniki)Tight bounds for popping algorithms.https://www.zbmath.org/1453.625422021-02-27T13:50:00+00:00"Guo, Heng"https://www.zbmath.org/authors/?q=ai:guo.heng"He, Kun"https://www.zbmath.org/authors/?q=ai:he.kunSummary: We sharpen the run-time analysis for algorithms under the partial rejection sampling framework. Our method yields improved bounds for: the cluster-popping algorithm for approximating all-terminal network reliability; the cycle-popping algorithm for sampling rooted spanning trees; and the sink-popping algorithm for sampling sink-free orientations. In all three applications, our bounds are not only tight in order, but also optimal in constants.A note on dynamic Gröbner bases computation.https://www.zbmath.org/1453.130802021-02-27T13:50:00+00:00"Hashemi, Amir"https://www.zbmath.org/authors/?q=ai:hashemi.amir"Talaashrafi, Delaram"https://www.zbmath.org/authors/?q=ai:talaashrafi.delaramSummary: For most applications of Gröbner bases, one needs only a nice Gröbner basis of a given ideal and does not need to specify the monomial ordering. By a nice basis, we mean a basis of small size. For this purpose, \textit{P. Gritzmann} and \textit{B. Sturmfels} [SIAM J. Discrete Math. 6, No. 2, 246--269 (1993; Zbl 0798.68157)] introduced the method of dynamic Gröbner basis computation and also a variant of Buchberger's algorithm to compute a nice Gröbner basis. \textit{M. Caboara} and \textit{J. Perry} [Appl. Algebra Eng. Commun. Comput. 25, No. 1--2, 99--117 (2014; Zbl 1328.13038)] improved this approach by reducing the size and number of intermediate linear programs. In this paper, we improve the latter approach by proposing an algorithm to compute nicer Gröbner bases. The proposed algorithm has been implemented in \textsc{Sage} and its efficiency is discussed via a set of benchmark polynomials.
For the entire collection see [Zbl 1346.68010].Data-driven, variational model reduction of high-dimensional reaction networks.https://www.zbmath.org/1453.626372021-02-27T13:50:00+00:00"Katsoulakis, Markos A."https://www.zbmath.org/authors/?q=ai:katsoulakis.markos-a"Vilanova, Pedro"https://www.zbmath.org/authors/?q=ai:vilanova.pedroSummary: In this work we present new scalable, information theory-based variational methods for the efficient model reduction of high-dimensional deterministic and stochastic reaction networks. The proposed methodology combines (a) information-theoretic tools for sensitivity analysis that allow us to identify the proper coarse variables of the reaction network, with (b) variational approximate inference methods for training a best-fit reduced model. This approach takes advantage of both physicochemical modeling and data-based approaches and allows one to construct parameterized reduced dynamics that are optimal in the number of variables, reactions and parameters, while controlling the information loss due to the reduction. We demonstrate the effectiveness of our model reduction method on several complex, high-dimensional chemical reaction networks arising in biochemistry.Efficient relative pose estimation for cameras and generalized cameras in case of known relative rotation angle.https://www.zbmath.org/1453.130822021-02-27T13:50:00+00:00"Martyushev, Evgeniy"https://www.zbmath.org/authors/?q=ai:martyushev.evgeniy-v"Li, Bo"https://www.zbmath.org/authors/?q=ai:li.bo|li.bo.2|li.bo.1The problem of relative pose estimation of a moving camera consists in determining the current camera pose, i.e. the position and orientation of the camera, with respect to a coordinate frame related to its previous position. The standard tool for this purpose is the 5-point algorithm, which is known to be minimal, because the associated polynomial ideal is in this case zero-dimensional.
In the paper under review, the authors present two minimal solvers for this problem for a camera with known relative rotation angle. In this direction, they describe a 4-point algorithm for regular cameras as well as a similar 5-point approach for generalized cameras. These problems are formulated in terms of polynomial equations, and effective methods based on Gröbner bases are applied to solve the corresponding systems. Experiments on synthetic and real datasets show that the described algorithms are numerically efficient.
Reviewer: Amir Hashemi (Isfahan)Optimizing movement in convex and non-convex path-networks to establish connectivity.https://www.zbmath.org/1453.682042021-02-27T13:50:00+00:00"Das, Sandip"https://www.zbmath.org/authors/?q=ai:das.sandip"Nandy, Ayan"https://www.zbmath.org/authors/?q=ai:nandy.ayan"Sarvottamananda"https://www.zbmath.org/authors/?q=ai:sarvottamananda.swamiSummary: We solve a movement problem in which there are \(n\) sensors in a path network in the plane, where each sensor communicates only with its two immediate neighbors and only up to a given maximum communication distance \(\lambda\). Initially, some of the inter-sensor distances may be more than \(\lambda\). We need to move sensors so that each sensor is in the communication range of its two neighbors, keeping the path topology intact. The problem is to minimize the maximum movement of any sensor. We present an \(O(n^3)\)-time algorithm to compute new positions of the sensors that establish transmission connectivity, called \(\lambda\)-connectivity in the paper, in a convex path-network while minimizing the maximum movement among the sensors. We also generalize our algorithm to ring, non-convex path, tethered and heterogeneous networks.
We provide data structures based on colored range searching to efficiently compute several graph parameters (e.g., density, neighborhood overlap, \(h\)-index).Variations of largest rectangle recognition amidst a bichromatic point set.https://www.zbmath.org/1453.681982021-02-27T13:50:00+00:00"Acharyya, Ankush"https://www.zbmath.org/authors/?q=ai:acharyya.ankush"De, Minati"https://www.zbmath.org/authors/?q=ai:de.minati"Nandy, Subhas C."https://www.zbmath.org/authors/?q=ai:nandy.subhas-chandra"Pandit, Supantha"https://www.zbmath.org/authors/?q=ai:pandit.supanthaSummary: The classical separability problem involving multi-color point sets is an important area of study in computational geometry. In this paper, we study different separability problems for a bichromatic point set \(P = P_r \cup P_b\) in \(\mathbb{R}^2\) and \(\mathbb{R}^3\), where \(P_r\) and \(P_b\) represent a set of \(n\) red points and a set of \(m\) blue points, respectively, and the objective is to compute a monochromatic object of a desired type and of maximum size. We propose in-place algorithms for computing (i) an arbitrarily oriented monochromatic rectangle of maximum size in \(\mathbb{R}^2\), and (ii) an axis-parallel monochromatic cuboid of maximum size in \(\mathbb{R}^3\). The time complexities of the algorithms for problems (i) and (ii) are \(O (m(m + n) (m \sqrt{n} + m \log m + n \log n))\) and \(O (m^3 \sqrt{n} + m^2 n \log n)\), respectively. As a prerequisite, we propose an in-place construction of the classic \(k\)-\(d\) tree data structure, originally invented by \textit{J. L. Bentley} [Commun. ACM 18, 509--517 (1975; Zbl 0306.68061)]. Our in-place variant of the \(k\)-\(d\) tree for a set of \(n\) points in \(\mathbb{R}^k\) supports orthogonal range counting queries using \(O(1)\) extra workspace and \(O (n^{1 - 1 \slash k})\) query time. The construction time of this data structure is \(O ( n \log n )\).
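For readers unfamiliar with the underlying structure, the classical recursive, pointer-based \(k\)-\(d\) tree that the in-place variant improves upon can be sketched in a few lines. This is a textbook illustration only, not the authors' \(O(1)\)-workspace algorithm.

```python
# Minimal classical k-d tree with orthogonal range counting in 2-d.
# Textbook sketch; NOT the in-place, O(1)-workspace variant of the paper.

def build(points, depth=0):
    """Recursively build a k-d tree node; split axis alternates with depth."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def range_count(node, lo, hi):
    """Count stored points p with lo[i] <= p[i] <= hi[i] in both coordinates."""
    if node is None:
        return 0
    p = node["point"]
    count = int(all(lo[i] <= p[i] <= hi[i] for i in range(2)))
    axis = node["axis"]
    if lo[axis] <= p[axis]:   # query box may reach into the left subtree
        count += range_count(node["left"], lo, hi)
    if hi[axis] >= p[axis]:   # ... and into the right subtree
        count += range_count(node["right"], lo, hi)
    return count

pts = [(1, 1), (2, 5), (4, 3), (5, 5), (7, 1), (8, 8)]
tree = build(pts)
print(range_count(tree, (2, 2), (7, 6)))  # (2,5), (4,3), (5,5) -> 3
```

The recursion here is exactly what the paper's non-recursive construction and query avoid, at the price of a more intricate in-place layout.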
Both the construction and query algorithms are non-recursive and, unlike the previously known construction and query algorithms for the in-place \(k\)-\(d\) tree, do not need an \(O ( \log n )\)-size recursion stack. We believe that this result is of independent interest. We also propose an algorithm for the problem of computing an arbitrarily oriented rectangle of maximum weight among a point set \(P = P_r \cup P_b\), where each point in \(P_b\) (resp. \( P_r)\) is associated with a negative (resp. positive) real-valued weight; the algorithm runs in \(O (m^2 (n + m) \log (n + m))\) time using \(O (n)\) extra space.Breaking symmetries to rescue sum of squares in the case of makespan scheduling.https://www.zbmath.org/1453.900792021-02-27T13:50:00+00:00"Verdugo, Victor"https://www.zbmath.org/authors/?q=ai:verdugo.victor"Verschae, José"https://www.zbmath.org/authors/?q=ai:verschae.jose"Wiese, Andreas"https://www.zbmath.org/authors/?q=ai:wiese.andreasSummary: The sum of squares (SoS) hierarchy gives an automatized technique to create a family of increasingly tight convex relaxations for binary programs. There are several problems for which a constant number of rounds of this hierarchy gives integrality gaps matching the best known approximation algorithms. For many other problems, however, ad-hoc techniques give better approximation ratios than SoS in the worst case, as shown by corresponding lower bound instances. Notably, in many cases these instances are invariant under the action of a large permutation group. This raises the question of how symmetries in a formulation degrade the performance of the relaxation obtained by the SoS hierarchy. In this paper, we study this for the case of the minimum makespan problem on identical machines. Our first result is to show that \(\Omega (n)\) rounds of SoS applied over the configuration linear program yields an integrality gap of at least 1.0009, where \(n\) is the number of jobs. This improves on the recent work by \textit{A.
Kurpisz} et al. [ibid. 172, No. 1--2 (B), 231--248 (2018; Zbl 1402.90055)] that shows an analogous result for the weaker \(\text{LS}_+\) and SA hierarchies. Our result is based on tools from representation theory of symmetric groups. Then, we consider the weaker assignment linear program and add a well chosen set of symmetry breaking inequalities that removes a subset of the machine permutation symmetries. We show that applying \(2^{\tilde{O}(1/\varepsilon^2)}\) rounds of the SA hierarchy to this stronger linear program reduces the integrality gap to \(1 + \varepsilon\), which yields a linear programming based polynomial time approximation scheme. Our results suggest that for this classical problem, symmetries were the main barrier preventing the SoS/SA hierarchies to give relaxations of polynomial complexity with an integrality gap of \(1 + \varepsilon\). We leave as an open question whether this phenomenon occurs for other symmetric problems.Maximum margin partial label learning.https://www.zbmath.org/1453.681612021-02-27T13:50:00+00:00"Yu, Fei"https://www.zbmath.org/authors/?q=ai:yu.fei"Zhang, Min-Ling"https://www.zbmath.org/authors/?q=ai:zhang.minlingSummary: Partial label learning aims to learn from training examples each associated with a set of \textit{candidate} labels, among which only one label is valid for the training example. The basic strategy to learn from partial label examples is disambiguation, i.e. by trying to recover the ground-truth labeling information from the candidate label set. As one of the popular machine learning paradigms, maximum margin techniques have been employed to solve the partial label learning problem. Existing attempts perform disambiguation by optimizing the margin between the maximum modeling output from candidate labels and that from non-candidate ones. Nonetheless, this formulation ignores considering the margin between the ground-truth label and other candidate labels. 
In this paper, a new maximum margin formulation for partial label learning is proposed which directly optimizes the margin between the ground-truth label and all other labels. Specifically, the predictive model is learned via an alternating
optimization procedure which coordinates the tasks of \textit{ground-truth label identification} and \textit{margin maximization} iteratively. Extensive experiments on artificial as well as real-world datasets show that the proposed approach is highly competitive with other well-established partial label learning approaches.Numerical approximation of multiple isolated roots of analytical systems.https://www.zbmath.org/1453.651072021-02-27T13:50:00+00:00"Giusti, Marc"https://www.zbmath.org/authors/?q=ai:giusti.marc"Yakoubsohn, Jean-Claude"https://www.zbmath.org/authors/?q=ai:yakoubsohn.jean-claudeSummary: The approximation of a multiple isolated root is a difficult problem. In fact, the root can even be repulsive for a fixed-point method such as Newton's method. Although there exists a huge literature on this topic, the answers given are not satisfactory. Numerical methods allowing a local convergence analysis often work under specific hypotheses. This viewpoint favouring numerical analysis forgets the geometry and the structure of the local algebra. Thus so-called symbolic-numeric methods appeared; although full of lessons, their precise numerical analysis is still missing. In this paper we propose a method of symbolic-numeric kind whose numerical treatment is certified. The general idea is to construct a finite sequence of systems admitting the same root, called the \textit{deflation sequence}, so that the multiplicity of the root drops strictly between two successive systems. So the root becomes regular. Then we can extract a regular square system, which we call the \textit{deflated system}. We have already described the construction of this deflation sequence when the singular root is known. The originality of this paper consists, on the one hand, in constructing a deflation sequence from a point close to the root and, on the other hand, in giving a numerical analysis of this method. Square-integrable analytic functions form the functional frame.
Using the Bergman kernel, the reproducing kernel of this functional frame, we are able to give an \textit{\(\alpha\)-theory à la Smale}. Furthermore, we present new results on the determinacy of the numerical rank of a matrix and the closeness to zero of the evaluation map. As an important consequence, we give an algorithm computing a deflation sequence \textit{free of \(\varepsilon\)}, a threshold quantity measuring the numerical approximation, meaning that the input of this algorithm does not involve the variable \(\varepsilon\).A constructive proof of a concentration bound for real-valued random variables.https://www.zbmath.org/1453.600602021-02-27T13:50:00+00:00"Mulzer, Wolfgang"https://www.zbmath.org/authors/?q=ai:mulzer.wolfgang-johann-heinrich"Shenkman, Natalia"https://www.zbmath.org/authors/?q=ai:shenkman.nataliaSummary: Almost 10 years ago,
\textit{R. Impagliazzo} and \textit{V. Kabanets} [Lect. Notes Comput. Sci. 6302, 617--631 (2010; Zbl 1305.68331)] gave a new combinatorial proof of Chernoff's bound for sums of bounded independent random variables. Unlike previous methods, their proof is \textit{constructive}. This means that it provides an efficient randomized algorithm for the following task: given a set of Boolean random variables whose sum is not concentrated around its expectation, find a subset of statistically dependent variables. However, the algorithm of Impagliazzo and Kabanets is given only for the Boolean case. On the other hand, the general proof technique works also for real-valued random variables, even though for this case, Impagliazzo and Kabanets obtain a concentration bound that is slightly suboptimal.
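The concentration phenomenon behind Chernoff's bound is easy to observe numerically. The following minimal simulation is an illustration only, not the Impagliazzo-Kabanets algorithm: it estimates how often the sum of \(n\) fair coin flips deviates from its mean \(n/2\) by more than \(\varepsilon n\), and compares this with a Hoeffding/Chernoff-type bound \(2e^{-2\varepsilon^2 n}\).

```python
# Empirical look at the concentration phenomenon behind Chernoff's bound.
# This only illustrates the bound; it is not the constructive
# Impagliazzo-Kabanets algorithm discussed in the summary.
import math
import random

def deviation_rate(n, eps, trials=2000, seed=0):
    """Fraction of trials where |sum of n fair flips - n/2| > eps * n."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    bad = 0
    for _ in range(trials):
        s = sum(rng.randint(0, 1) for _ in range(n))
        if abs(s - n / 2) > eps * n:
            bad += 1
    return bad / trials

for n in (20, 80, 320):
    rate = deviation_rate(n, eps=0.1)
    bound = 2 * math.exp(-2 * 0.1 ** 2 * n)   # Hoeffding/Chernoff-type bound
    print(f"n={n:4d}  empirical={rate:.3f}  bound={bound:.3f}")
```

As \(n\) grows, the empirical deviation rate drops rapidly, in line with the exponential bound.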
Herein, we revisit both these issues and show that in fact it is relatively easy to extend the Impagliazzo-Kabanets algorithm to real-valued random variables and to improve the corresponding concentration bound by a constant factor.\(\ell_1\)-sparsity approximation bounds for packing integer programs.https://www.zbmath.org/1453.901022021-02-27T13:50:00+00:00"Chekuri, Chandra"https://www.zbmath.org/authors/?q=ai:chekuri.chandra-s"Quanrud, Kent"https://www.zbmath.org/authors/?q=ai:quanrud.kent"Torres, Manuel R."https://www.zbmath.org/authors/?q=ai:torres.manuel-rSummary: We consider approximation algorithms for packing integer programs (PIPs) of the form \(\max \{\langle c, x\rangle : Ax \le b, x \in \{0, 1\}^n\}\) where \(A, b\) and \(c\) are nonnegative. We let \(W = \min_{i, j} b_i / A_{i, j}\) denote the width of \(A\) which is at least 1. Previous work by \textit{N. Bansal} et al. [Theory Comput. 8, Paper No. 24, 533--565 (2012; Zbl 1297.68259)] obtained an \(\Omega (\frac{1}{\Delta_0^{1/\lfloor W \rfloor }})\)-approximation ratio where \(\Delta_0\) is the maximum number of nonzeroes in any column of \(A\) (in other words the \(\ell_0\)-column sparsity of \(A)\). They raised the question of obtaining approximation ratios based on the \(\ell_1\)-column sparsity of \(A\) (denoted by \(\Delta_1)\) which can be much smaller than \(\Delta_0\). Motivated by recent work on covering integer programs [the first two authors, in: Proceedings of the 30th annual ACM-SIAM symposium on discrete algorithms, SODA 2019. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 1596--1615 (2019; Zbl 1431.68149); \textit{A. Chen} et al., in: Proceedings of the 27th annual ACM-SIAM symposium on discrete algorithms, SODA 2016. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 
1984--2003 (2016; Zbl 1423.90147)] we show that simple algorithms based on randomized rounding followed by alteration, similar to those of Bansal et al. [loc. cit.] (but with a twist), yield approximation ratios for PIPs based on \(\Delta_1\). First, following an integrality gap example from Bansal et al. [loc. cit.], we observe that the case of \(W = 1\) is as hard as maximum independent set even when \(\Delta_1 \le 2\). In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width \(W = 1 + \epsilon\) where \(\epsilon \in (0, 1]\), we obtain an \(\varOmega (\epsilon^2/\varDelta_1)\)-approximation. In the large width regime, when \(W \ge 2\), we obtain an \(\Omega ((\frac{1}{1 + \Delta_1/W})^{1/(W - 1)})\)-approximation. We also obtain a \((1 - \epsilon)\)-approximation when \(W = \Omega (\frac{\log (\Delta_1/\epsilon)}{\epsilon^2})\). Viewing the rounding algorithms as contention resolution schemes, we obtain approximation algorithms in the more general setting when the objective is a non-negative submodular function.Online submodular maximization: beating 1/2 made simple.https://www.zbmath.org/1453.682152021-02-27T13:50:00+00:00"Buchbinder, Niv"https://www.zbmath.org/authors/?q=ai:buchbinder.niv"Feldman, Moran"https://www.zbmath.org/authors/?q=ai:feldman.moran"Filmus, Yuval"https://www.zbmath.org/authors/?q=ai:filmus.yuval"Garg, Mohit"https://www.zbmath.org/authors/?q=ai:garg.mohitSummary: The Submodular Welfare Maximization problem (SWM) captures an important subclass of combinatorial auctions and has been studied extensively. In particular, it has been studied in a natural online setting in which items arrive one-by-one and should be allocated irrevocably upon arrival. For this setting,
\textit{N. Korula} et al. [SIAM J. Comput. 47, No. 3, 1056--1086 (2018; Zbl 1397.91265)]
were able to show that the greedy algorithm is 0.5052-competitive when the items arrive in a uniformly random order. Unfortunately, however, their proof is very long and involved. In this work, we present an (arguably) much simpler analysis of the same algorithm that provides a slightly better guarantee of 0.5096-competitiveness. Moreover, this analysis applies also to a generalization of online SWM in which the sets defining a (simple) partition matroid arrive online in a uniformly random order, and we would like to maximize a monotone submodular function subject to this matroid. Furthermore, for this more general problem, we prove an upper bound of 0.574 on the competitive ratio of the greedy algorithm, ruling out the possibility that the competitiveness of this natural algorithm matches the optimal offline approximation ratio of \(1 - 1/e\).Higher-order Fourier analysis and applications.https://www.zbmath.org/1453.680022021-02-27T13:50:00+00:00"Hatami, Hamed"https://www.zbmath.org/authors/?q=ai:hatami.hamed"Hatami, Pooya"https://www.zbmath.org/authors/?q=ai:hatami.pooya"Lovett, Shachar"https://www.zbmath.org/authors/?q=ai:lovett.shacharSummary: Fourier analysis has been extremely useful in many areas of mathematics. In the last several decades, it has been used extensively in theoretical computer science. Higher-order Fourier analysis is an extension of the classical Fourier analysis, where one allows to generalize the ``linear phases'' to higher degree polynomials. It has emerged from the seminal proof of Gowers of Szemerédi's theorem with improved quantitative bounds, and has been developed since, chiefly by the number theory community. In parallel, it has found applications also in theoretical computer science, mostly in algebraic property testing, coding theory and complexity theory.
The purpose of this book is to lay the foundations of higher-order Fourier analysis, aimed towards applications in theoretical computer science with a focus on algebraic property testing.Formal concept analysis with constraints by EM operators.https://www.zbmath.org/1453.681792021-02-27T13:50:00+00:00"Mao, Hua"https://www.zbmath.org/authors/?q=ai:mao.huaSummary: Formal concept analysis is a method of exploratory data analysis that aims at the extraction of natural clusters from object-attribute data tables. We present a way to add a user's background knowledge to formal concept analysis. The type of background knowledge we deal with relates to the relative importance of attributes in the input data.
We introduce EM operators, which impose constraints on the attributes in formal concept analysis. The main aim is to make the extraction of concepts from the input data more focused by taking the background knowledge into account. In particular, only concepts that are compatible with the constraint are extracted from the data. Therefore, the number of extracted concepts becomes smaller, since we leave out non-interesting concepts. We concentrate on foundational aspects such as mathematical feasibility and computational tractability.Leveraging Bayesian analysis to improve accuracy of approximate models.https://www.zbmath.org/1453.624172021-02-27T13:50:00+00:00"Nadiga, Balasubramanya"https://www.zbmath.org/authors/?q=ai:nadiga.balasubramanya-t"Jiang, Chiyu"https://www.zbmath.org/authors/?q=ai:jiang.chiyu"Livescu, Daniel"https://www.zbmath.org/authors/?q=ai:livescu.danielSummary: We focus on improving the accuracy of an approximate model of a multiscale dynamical system that uses a set of parameter-dependent terms to account for the effects of unresolved or neglected dynamics on resolved scales. We start by considering various methods of calibrating and analyzing such a model given \textit{a few} well-resolved simulations. After presenting results for various point estimates and discussing some of their shortcomings, we demonstrate (a) the potential of hierarchical Bayesian analysis to uncover previously unanticipated physical dependencies in the approximate model, and (b) how such insights can then be used to improve the model. In effect, parametric dependencies found by the Bayesian analysis are used to improve structural aspects of the model. While we choose to illustrate the procedure in the context of a closure model for buoyancy-driven, variable-density turbulence, the statistical nature of the approach makes it more generally applicable.
Towards addressing issues of increased computational cost associated with the procedure, we demonstrate the use of a neural network based surrogate in accelerating the posterior sampling process and point to recent developments in variational inference as an alternative methodology for greatly mitigating such costs. We conclude by suggesting that modern validation and uncertainty quantification techniques such as the ones we consider have a valuable role to play in the development and improvement of approximate models.Quasi-automatic groups are asynchronously automatic.https://www.zbmath.org/1453.200832021-02-27T13:50:00+00:00"Blanchette, Benjamin"https://www.zbmath.org/authors/?q=ai:blanchette.benjaminSummary: A quasi-automatic semigroup is a finitely generated semigroup with a rational set of representatives such that the graph of right multiplication by any generator is a rational relation. A asynchronously automatic semigroup is a quasi-automatic semigroup for which these rational relations are also recognisable by two-tape automata. We show that when such a semigroup happens to be a group, the converse actually holds, meaning quasi-automatic groups are asynchronously automatic.Nivat's conjecture holds for sums of two periodic configurations.https://www.zbmath.org/1453.050052021-02-27T13:50:00+00:00"Szabados, Michal"https://www.zbmath.org/authors/?q=ai:szabados.michalSummary: Nivat's conjecture [\textit{M. Nivat}, invited talk at: Automata, languages and programming. 24th international colloquium, ICALP'97, Bologna, Italy, July 7--11 (1997)] is a long-standing open combinatorial problem. It concerns two-dimensional configurations, that is, maps \(\mathbb{Z}^2 \rightarrow \mathcal{A}\) where \(\mathcal{A}\) is a finite set of symbols. Such configurations are often understood as colorings of a two-dimensional square grid. Let \(P_c(m,n)\) denote the number of distinct \(m\times n\) block patterns occurring in a configuration \(c\). 
Configurations satisfying \(P_c(m,n)\leq mn\) for some \(m,n\in\mathbb{N}\) are said to have low rectangular complexity. Nivat conjectured that such configurations are necessarily periodic.
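The complexity function \(P_c(m,n)\) can be made concrete with a small sketch that counts the distinct \(m\times n\) blocks occurring in a window of a configuration. This is an illustration only; the infinite configuration is approximated here by a finite window, and the striped example is invented for the sketch.

```python
# Sketch: count distinct m x n patterns P_c(m, n) in a finite window of a
# configuration c : Z^2 -> A, here a horizontally periodic "stripe" coloring.
# Illustration only; a truly infinite configuration is approximated by
# scanning a size x size window of block positions.

def pattern_count(config, size, m, n):
    """Number of distinct m x n blocks whose top-left corner lies in a
    size x size window of the configuration `config`."""
    seen = set()
    for i in range(size):
        for j in range(size):
            block = tuple(
                tuple(config(i + di, j + dj) for dj in range(n))
                for di in range(m)
            )
            seen.add(block)
    return len(seen)

def stripes(i, j):
    # Periodic with horizontal period 3: the color depends only on j mod 3.
    return j % 3

# P_c(2, 2) = 3 <= 2*2, so this configuration has low rectangular
# complexity -- and, consistent with Nivat's conjecture, it is periodic.
print(pattern_count(stripes, 12, 2, 2))  # 3
```

For the striped configuration every \(2\times 2\) block is determined by \(j \bmod 3\), so \(P_c(2,2)=3\le mn=4\), matching the low-complexity hypothesis of the conjecture.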
\textit{J. Kari} and the author [Inf. Comput. 271, Article ID 104481, 25 p. (2020; Zbl 1434.68266)] showed that low complexity configurations can be decomposed into a sum of periodic configurations. In this paper we show that if there are at most two components, Nivat's conjecture holds. As a corollary we obtain an alternative proof of a result of \textit{V. Cyr} and \textit{B. Kra} [Eur. J. Comb. 52, Part A, 146--173 (2016; Zbl 1327.05004)]: If there exist \(m,n\in\mathbb{N}\) such that \(P_c(m,n)\leq mn/2\), then \(c\) is periodic. The technique used in this paper combines the algebraic approach of Kari and the author [loc. cit.] with balanced sets of Cyr and Kra [loc. cit.].
For the entire collection see [Zbl 1409.68026].Just interpolate: kernel ``ridgeless'' regression can generalize.https://www.zbmath.org/1453.681552021-02-27T13:50:00+00:00"Liang, Tengyuan"https://www.zbmath.org/authors/?q=ai:liang.tengyuan"Rakhlin, Alexander"https://www.zbmath.org/authors/?q=ai:rakhlin.alexanderSummary: In the absence of explicit regularization, Kernel ``Ridgeless'' Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions which is due to a combination of high dimensionality of the input data, curvature of the kernel function and favorable geometric properties of the data such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs in the MNIST dataset.Topological methods in data analysis and visualization V. Theory, algorithms, and applications. Selected papers based on the presentations at the TopoInVis workshop, Tokyo, Japan, February 27--28, 2017.https://www.zbmath.org/1453.680122021-02-27T13:50:00+00:00"Carr, Hamish (ed.)"https://www.zbmath.org/authors/?q=ai:carr.hamish"Fujishiro, Issei (ed.)"https://www.zbmath.org/authors/?q=ai:fujishiro.issei"Sadlo, Filip (ed.)"https://www.zbmath.org/authors/?q=ai:sadlo.filip"Takahashi, Shigeo (ed.)"https://www.zbmath.org/authors/?q=ai:takahashi.shigeoPublisher's description: This collection of peer-reviewed workshop papers provides comprehensive coverage of cutting-edge research into topological approaches to data analysis and visualization. 
It encompasses the full range of new algorithms and insights, including fast homology computation, comparative analysis of simplification techniques, and key applications in materials and medical science. The book also addresses core research challenges such as the representation of large and complex datasets, and integrating numerical methods with robust combinatorial algorithms. In keeping with the focus of the TopoInVis 2017 Workshop, the contributions reflect the latest advances in finding experimental solutions to open problems in the sector. They provide an essential snapshot of state-of-the-art research, helping researchers to keep abreast of the latest developments and providing a basis for future work. Gathering papers by some of the world's leading experts on topological techniques, the book represents a valuable contribution to a field of growing importance, with applications in disciplines ranging from engineering to medicine.
The articles of mathematical interest will be reviewed individually. For the preceding workshop see [Zbl 1369.68015].Optimizing movement in convex and non-convex path-networks to establish connectivity.https://www.zbmath.org/1453.682052021-02-27T13:50:00+00:00"Das, Sandip"https://www.zbmath.org/authors/?q=ai:das.sandip"Nandy, Ayan"https://www.zbmath.org/authors/?q=ai:nandy.ayan"Sarvottamananda, Swami"https://www.zbmath.org/authors/?q=ai:sarvottamananda.swamiSummary: We solve a min-max movement problem in which there are \(n\) sensors in a path network in the plane, where any sensor communicates only with its two immediate neighbors and only at a given maximum communication distance \(\lambda\). We need to move sensors so that each sensor is in the communication range of its two neighbors, keeping the path topology intact. We present an \(O(n^3)\) algorithm for the min-max movement problem in a convex path-network, which minimizes the maximum movement among the sensors. We also generalize our algorithm to ring, non-convex path, tethered and heterogeneous networks.
For the entire collection see [Zbl 1355.68015].A cryptographer's Conspiracy Santa.https://www.zbmath.org/1453.680802021-02-27T13:50:00+00:00"Bultel, Xavier"https://www.zbmath.org/authors/?q=ai:bultel.xavier"Dreier, Jannik"https://www.zbmath.org/authors/?q=ai:dreier.jannik"Dumas, Jean-Guillaume"https://www.zbmath.org/authors/?q=ai:dumas.jean-guillaume"Lafourcade, Pascal"https://www.zbmath.org/authors/?q=ai:lafourcade.pascalSummary: In Conspiracy Santa, a variant of Secret Santa, a group of people offer each other Christmas gifts, where each member of the group receives a gift from the other members of the group. To that end, the members of the group form conspiracies, to decide on appropriate gifts, and usually divide the cost of each gift among all participants of that conspiracy. This requires to settle the shared expenses per conspiracy, so Conspiracy Santa can actually be seen as an aggregation of several shared expenses problems.
First, we show that the problem of finding a minimum number of transactions when settling shared expenses is NP-complete. Still, good greedy approximations exist. Second, we present a greedy distributed secure solution to Conspiracy Santa. This solution allows a group of people to share the expenses for the gifts in such a way that no participant learns the price of his gift, while at the same time notably reducing the number of transactions with respect to a naive aggregation. Furthermore, our solution does not require a trusted third party, and can be implemented either physically (the participants are in the same room and exchange money using envelopes) or virtually, using a cryptocurrency.
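The kind of greedy approximation alluded to above can be sketched as follows: compute each participant's net balance, then repeatedly match the largest creditor with the largest debtor. This is a standard heuristic for the settling problem, not the paper's privacy-preserving protocol, and all names are illustrative.

```python
# Greedy settlement of shared expenses: repeatedly match the largest
# creditor with the largest debtor.  A standard heuristic sketch, not the
# paper's secure protocol (the true minimum number of transactions is
# NP-complete to find, per the summary).

def settle(balances):
    """balances: {person: net amount}, positive = is owed money.
    Returns a list of (debtor, creditor, amount) transactions."""
    assert abs(sum(balances.values())) < 1e-9, "balances must sum to zero"
    credit = {p: b for p, b in balances.items() if b > 0}
    debit = {p: -b for p, b in balances.items() if b < 0}
    txs = []
    while credit and debit:
        c = max(credit, key=credit.get)   # largest creditor
        d = max(debit, key=debit.get)     # largest debtor
        amt = min(credit[c], debit[d])
        txs.append((d, c, amt))           # debtor pays creditor
        credit[c] -= amt
        debit[d] -= amt
        if credit[c] == 0:
            del credit[c]
        if debit[d] == 0:
            del debit[d]
    return txs

# Alice paid the whole 90 gift; each of the three owes a 30 share.
print(settle({"Alice": 30, "Bob": 0, "Carol": -30}))
```

On this tiny instance the greedy rule settles everything with a single transaction from Carol to Alice.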
For the entire collection see [Zbl 1390.68022].On the exact complexity of polyomino packing.https://www.zbmath.org/1453.680912021-02-27T13:50:00+00:00"Bodlaender, Hans L."https://www.zbmath.org/authors/?q=ai:bodlaender.hans-l"van der Zanden, Tom C."https://www.zbmath.org/authors/?q=ai:van-der-zanden.tom-cSummary: We show that the problem of deciding whether a collection of polyominoes, each fitting in a \(2\times O(\log n)\) rectangle, can be packed into a \(3\times n\) box does not admit a \(2^{o(n/\log n)}\)-time algorithm, unless the Exponential Time Hypothesis fails. We also give an algorithm that attains this lower bound, solving any instance of polyomino packing with total area \(n\) in \(2^{O(n/\log n)}\) time. This establishes a tight bound on the complexity of Polyomino Packing, even in a very restricted case. In contrast, for a \(2\times n\) box, we show that the problem can be solved in strongly subexponential time.
For the entire collection see [Zbl 1390.68022].Data compression in big graph warehouse.https://www.zbmath.org/1453.680742021-02-27T13:50:00+00:00"Polyakov, I. V."https://www.zbmath.org/authors/?q=ai:polyakov.i-v"Chepovskiy, A. A."https://www.zbmath.org/authors/?q=ai:chepovskii.a-a"Chepovskiy, A. M."https://www.zbmath.org/authors/?q=ai:chepovskii.a-mSummary: In this paper, we propose an approach for compact storage of big graphs. We propose preprocessing algorithms for graphs of a certain type, which can significantly increase the data density on the disk and increase performance for basic operations with graphs.Tracks from hell -- when finding a proof may be easier than checking it.https://www.zbmath.org/1453.680892021-02-27T13:50:00+00:00"Almanza, Matteo"https://www.zbmath.org/authors/?q=ai:almanza.matteo"Leucci, Stefano"https://www.zbmath.org/authors/?q=ai:leucci.stefano"Panconesi, Alessandro"https://www.zbmath.org/authors/?q=ai:panconesi.alessandroSummary: We consider the popular smartphone game Trainyard: a puzzle game that requires the player to lay down tracks in order to route colored trains from departure stations to suitable arrival stations. While it is already known
[the authors, LIPIcs -- Leibniz Int. Proc. Inform. 49, Article 2, 14~p. (2016; Zbl 1369.68225); Theor. Comput. Sci. 748, 66--76 (2018; Zbl 1402.68078)]
that the problem of finding a solution to a given Trainyard instance (i.e., game level) is NP-hard, determining the computational complexity of checking whether a candidate solution (i.e., a track layout) solves the level was left as an open problem. In this paper we prove that this verification problem is PSPACE-complete, thus implying that Trainyard players might not only have a hard time finding solutions to a given level, but they might even be unable to efficiently recognize them.
For the entire collection see [Zbl 1390.68022].Who witnesses The Witness? Finding witnesses in The Witness is hard and sometimes impossible.https://www.zbmath.org/1453.680882021-02-27T13:50:00+00:00"Abel, Zachary"https://www.zbmath.org/authors/?q=ai:abel.zachary-r"Bosboom, Jeffrey"https://www.zbmath.org/authors/?q=ai:bosboom.jeffrey"Demaine, Erik D."https://www.zbmath.org/authors/?q=ai:demaine.erik-d"Hamilton, Linus"https://www.zbmath.org/authors/?q=ai:hamilton.linus"Hesterberg, Adam"https://www.zbmath.org/authors/?q=ai:hesterberg.adam"Kopinsky, Justin"https://www.zbmath.org/authors/?q=ai:kopinsky.justin"Lynch, Jayson"https://www.zbmath.org/authors/?q=ai:lynch.jayson"Rudoy, Mikhail"https://www.zbmath.org/authors/?q=ai:rudoy.mikhailSummary: We analyze the computational complexity of the many types of pencil-and-paper-style puzzles featured in the 2016 puzzle video game The Witness. In all puzzles, the goal is to draw a path in a rectangular grid graph from a start vertex to a destination vertex. The different puzzle types place different constraints on the path: preventing some edges from being visited (broken edges); forcing some edges or vertices to be visited (hexagons); forcing some cells to have certain numbers of incident path edges (triangles); or forcing the regions formed by the path to be partially monochromatic (squares), have exactly two special cells (stars), or be singly covered by given shapes (polyominoes) and/or negatively counting shapes (antipolyominoes). We show that any one of these clue types (except the first) is enough to make path finding NP-complete (``witnesses exist but are hard to find''), even for rectangular boards. Furthermore, we show that a final clue type (antibody), which necessarily ``cancels'' the effect of another clue in the same region, makes path finding \(\Sigma_2\)-complete (``witnesses do not exist''), even with a single antibody (combined with many anti/polyominoes), and the problem gets no harder with many antibodies.
For the entire collection see [Zbl 1390.68022].Two kinds of classifications based on improved gravitational search algorithm and particle swarm optimization algorithm.https://www.zbmath.org/1453.901992021-02-27T13:50:00+00:00"Hu, Hongping"https://www.zbmath.org/authors/?q=ai:hu.hongping"Cui, Xiaxia"https://www.zbmath.org/authors/?q=ai:cui.xiaxia"Bai, Yanping"https://www.zbmath.org/authors/?q=ai:bai.yanpingSummary: Gravitational Search Algorithm (GSA) is a widely used metaheuristic algorithm. Although GSA has fewer parameters to adjust, it suffers from a slow convergence rate. In this paper, building on the combination of GSA and PSO (PSO-GSA), we replace the constant acceleration coefficients with exponential functions and propose an improved PSO-GSA algorithm (written as I-PSO-GSA) for solving two kinds of classification problems: surface water quality and the moving direction of robots. I-PSO-GSA is employed to optimize the weights and biases of a backpropagation (BP) neural network. The experimental results show that, compared with the combination of PSO and GSA (PSO-GSA), single PSO, and single GSA for optimizing the parameters of the BP neural network, I-PSO-GSA outperforms PSO-GSA, PSO, and GSA and has better classification accuracy for these two practical problems.Prefix and suffix reversals on strings.https://www.zbmath.org/1453.682282021-02-27T13:50:00+00:00"Fertin, Guillaume"https://www.zbmath.org/authors/?q=ai:fertin.guillaume"Jankowiak, Loïc"https://www.zbmath.org/authors/?q=ai:jankowiak.loic"Jean, Géraldine"https://www.zbmath.org/authors/?q=ai:jean.geraldineSummary: The \textsc{Sorting by Prefix Reversals} problem consists in sorting the elements of a given permutation \(\pi\) using a minimum number of prefix reversals, i.e. reversals that always involve the leftmost element of \(\pi\). A natural extension of this problem is to consider strings, in which any letter may appear several times, rather than permutations.
In strings, three different types of problems arise: \textit{grouping} (given a string \(S\), transform it so that all identical letters are consecutive), \textit{sorting} (a constrained version of grouping, in which the target string must be lexicographically ordered) and \textit{rearranging} (given two strings \(S\) and \(T\), transform \(S\) into \(T\)). In this paper, we study these three problems, from an algorithmic viewpoint, in the setting where two operations, rather than one, are allowed: namely, \textit{prefix and suffix} reversals -- where a suffix reversal must always involve the rightmost element of the string. We first compare the ``prefix reversals only'' case to our case, before presenting a series of algorithmic results on these three problems concerning polynomiality, constant-ratio approximation algorithms, NP-hardness and fixed-parameter tractability. These results depend on the size \(k\) of the alphabet on which the strings are built, with a particular focus on small-sized alphabet instances (i.e., \(k = O(1)\)) and big-sized alphabet instances (i.e., \(n - k = O(1)\), where \(n\) is the length of the input string(s)).Remote IP protection using timing channels.https://www.zbmath.org/1453.680252021-02-27T13:50:00+00:00"Donda, Ariano-Tim"https://www.zbmath.org/authors/?q=ai:donda.ariano-tim"Samarin, Peter"https://www.zbmath.org/authors/?q=ai:samarin.peter"Samotyja, Jacek"https://www.zbmath.org/authors/?q=ai:samotyja.jacek"Lemke-Rust, Kerstin"https://www.zbmath.org/authors/?q=ai:lemke-rust.kerstin"Paar, Christof"https://www.zbmath.org/authors/?q=ai:paar.christofSummary: We introduce the use of timing channels for digital watermarking of embedded hardware and software components. In addition to previous side channel watermarking schemes, timing analysis offers new perspectives for a remote verification of mobile and embedded products.
Timing channels make it possible to detect the presence of a watermark solely by measuring program execution times. We propose schemes for embedding authorship and fingerprint marks that are built upon conditional timing delays. We provide experimental evidence by protecting an implementation of an image binarization circuit on an FPGA board that is connected over Ethernet to a remote PC. The circuit constantly leaks the watermark over the timing channel by modulating its execution time, which is successfully detected by using an oscilloscope and an EM probe, as well as by using software on a remote PC. Our solution for remote verification is of special interest for high-performance services, as it forces an adaptive adversary towards increased costs in time, memory, and circuitry when bypassing these schemes.
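The conditional-delay scheme described above can be illustrated with a minimal simulation. Everything below (the base timing, the delay magnitude, and the parity trigger) is a hypothetical stand-in, not the authors' FPGA implementation: a watermarked variant adds a fixed delay on a keyed trigger condition, and a remote verifier detects the watermark from mean execution times alone.

```python
# Minimal sketch of a timing-channel watermark; BASE_MS, DELAY_MS and the
# parity trigger are hypothetical parameters chosen for illustration.
BASE_MS = 10.0   # nominal execution time of the protected circuit (ms)
DELAY_MS = 2.0   # extra delay inserted when the watermark triggers (ms)

def execution_time(x, watermarked):
    # The watermark fires on a keyed condition; here, simply odd inputs.
    return BASE_MS + (DELAY_MS if watermarked and x % 2 == 1 else 0.0)

def watermark_present(times_on_trigger, times_off_trigger, threshold=1.0):
    # Remote detection: compare mean timings on and off the trigger set.
    mean = lambda ts: sum(ts) / len(ts)
    return mean(times_on_trigger) - mean(times_off_trigger) > threshold

marked = watermark_present(
    [execution_time(x, True) for x in range(1, 100, 2)],
    [execution_time(x, True) for x in range(0, 100, 2)])
unmarked = watermark_present(
    [execution_time(x, False) for x in range(1, 100, 2)],
    [execution_time(x, False) for x in range(0, 100, 2)])
```

With these parameters the watermarked variant is flagged (`marked`) while the unmarked one is not (`unmarked`), mirroring the remote software detection described in the abstract.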
For the entire collection see [Zbl 1318.68031].Securely solving classical network flow problems.https://www.zbmath.org/1453.680792021-02-27T13:50:00+00:00"Aly, Abdelrahaman"https://www.zbmath.org/authors/?q=ai:aly.abdelrahaman"Van Vyve, Mathieu"https://www.zbmath.org/authors/?q=ai:van-vyve.mathieuSummary: We investigate how to solve several classical network flow problems using secure multi-party computation. We consider the shortest path problem, the minimum mean cycle problem and the minimum cost flow problem. To the best of our knowledge, this is the first time the last two problems have been addressed in a general multi-party computation setting. Furthermore, our study highlights the complexity gaps between traditional and secure implementations of the solutions, which we then test through implementation. It also explores various trade-offs between performance and security. Additionally, it provides protocols that can be used as building blocks to solve complex problems. Applications of our work can be found in: communication networks, routing data from rival company hubs; distribution problems, retailer/supplier selection in multi-level supply chains that want to share routes without disclosing sensitive information; amongst others.
For the entire collection see [Zbl 1318.68031].A clustering approach for privacy-preserving in social networks.https://www.zbmath.org/1453.680732021-02-27T13:50:00+00:00"Wang, Rong"https://www.zbmath.org/authors/?q=ai:wang.rong"Zhang, Min"https://www.zbmath.org/authors/?q=ai:zhang.min.4|zhang.min.3|zhang.min.7|zhang.min.5|zhang.min.6|zhang.min.1|zhang.min.2"Feng, Dengguo"https://www.zbmath.org/authors/?q=ai:feng.dengguo"Fu, Yanyan"https://www.zbmath.org/authors/?q=ai:fu.yanyanSummary: Social networks, in which huge numbers of people spread massive information, are developing quite rapidly. Here, people can obtain interesting information much more quickly and conveniently. However, users' privacy is also easily leaked here. Much work has been done to address this problem, most of it focusing on either attribute information or structure information. This is insufficient, because both attributes and structures, including sensitive attributes, are important in social networks, and we need to protect both of them. In this paper, we introduce a novel privacy-preserving approach that considers both attribute and structure information. In particular, sensitive attributes are considered to resist re-identification attacks. Moreover, we define an entropy measure to quantify how well sensitive attributes are preserved.
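The entropy measure mentioned at the end of the clustering abstract above can be sketched generically as the Shannon entropy of the sensitive-attribute values inside one anonymized cluster (the paper's exact definition may differ; this is only the standard formula it presumably builds on):

```python
from collections import Counter
from math import log2

def sensitive_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of the
    sensitive-attribute values within a cluster. Higher entropy means an
    attacker who re-identifies the cluster learns less about any member."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())
```

For example, a cluster whose sensitive attribute is evenly split between two values has entropy 1.0 bit, while a cluster with a single shared value has entropy 0.0 and offers no protection.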
For the entire collection see [Zbl 1318.68031].Convergence of Markovian stochastic approximation for Markov random fields with hidden variables.https://www.zbmath.org/1453.626192021-02-27T13:50:00+00:00"Qi, Anna"https://www.zbmath.org/authors/?q=ai:qi.anna"Yang, Lihua"https://www.zbmath.org/authors/?q=ai:yang.lihua"Huang, Chao"https://www.zbmath.org/authors/?q=ai:huang.chaoFormal modeling of random oracle programmability and verification of signature unforgeability using task-PIOAs.https://www.zbmath.org/1453.680262021-02-27T13:50:00+00:00"Yoneyama, Kazuki"https://www.zbmath.org/authors/?q=ai:yoneyama.kazukiSummary: The task-structured Probabilistic I/O Automata (task-PIOA) framework provides a method to formulate and to prove the computationally-bounded security of non-sequential processing systems in a formal way. Though existing works show security analyses of some classic cryptographic protocols (e.g., the EGL oblivious transfer) against simple adversaries (e.g., honest but curious adversary), there is no case study for fundamental cryptographic primitives (e.g., encryption and signature) against sufficiently strong adversaries (e.g., IND-CCA for encryption and EUF-CMA for signature). In this paper, we propose a formulation of signature against EUF-CMA in the task-PIOA framework. Using the task-PIOA framework allows us to verify security of signature schemes in the non-sequential scheduling manner. We show the validity and usefulness of our formulation by giving a formal security analysis of the FDH signature scheme. In order to prove the security, we also introduce a method to utilize the power of
random oracles. As far as we know, this work is the first case study to clarify the usefulness of random oracles in this framework.
For the entire collection see [Zbl 1318.68031].How selfish individuals achieve unselfish goals: majority-based progressive control of discrete event systems.https://www.zbmath.org/1453.931452021-02-27T13:50:00+00:00"Park, Seong-Jin"https://www.zbmath.org/authors/?q=ai:park.seong-jin"Yang, Jung-Min"https://www.zbmath.org/authors/?q=ai:yang.jung-minSummary: We present \textit{majority-based progressive control} in which local supervisors have not only their own private specifications as primary goals, but also an additional global specification representing a desirable behavior of the overall system. The control policy follows the majority rule, and the ultimate goal is to achieve a progressive closed-loop behavior whereby the number of local supervisors meeting private specifications increases as the controlled system evolves. For this purpose, we present the notion of majority-controllability of a global specification, and show that it is a crucial condition for the existence of local supervisors achieving a progressive global specification. Using the presented control theory, we analyze how the federal estate tax in the United States was repealed in 2010.Bound on the diameter of metacyclic groups.https://www.zbmath.org/1453.050482021-02-27T13:50:00+00:00"Rajeevsarathy, Kashyap"https://www.zbmath.org/authors/?q=ai:rajeevsarathy.kashyap"Sarkar, Siddhartha"https://www.zbmath.org/authors/?q=ai:sarkar.siddharthaAn approach integrating planning and image-based visual servo control for road following and moving obstacles avoidance.https://www.zbmath.org/1453.931652021-02-27T13:50:00+00:00"Reyes, Ramses"https://www.zbmath.org/authors/?q=ai:reyes.ramses"Murrieta-Cid, Rafael"https://www.zbmath.org/authors/?q=ai:murrieta-cid.rafaelSummary: This paper proposes an approach that integrates planning and image based visual servo control for road following and moving obstacle avoidance.
One main objective of this article is to represent a robot's general plan or strategy in the form of a finite state machine or automaton. This automaton is designed prior to execution of the plan and is then used for any instance of the robot's task. The visual servo control is used to regulate the robot's velocity according to the visual target (task specification) which depends on the state in the automaton. All the algorithms and control laws have been implemented and simulation results and experiments with a real scale-size car-like robot are presented and discussed.The essence of Ruby.https://www.zbmath.org/1453.680312021-02-27T13:50:00+00:00"Ueno, Katsuhiro"https://www.zbmath.org/authors/?q=ai:ueno.katsuhiro"Fukasawa, Yutaka"https://www.zbmath.org/authors/?q=ai:fukasawa.yutaka"Morihata, Akimasa"https://www.zbmath.org/authors/?q=ai:morihata.akimasa"Ohori, Atsushi"https://www.zbmath.org/authors/?q=ai:ohori.atsushiSummary: Ruby is a dynamic, object-oriented language with advanced features such as the \texttt{yield} operator and dynamic class manipulation. They make Ruby a popular, highly productive scripting language, but they also make the semantics of Ruby complicated and difficult to understand. Even the JIS/ISO standard of Ruby seems to contain some ambiguities. For Ruby to be established as a reliable scripting language, it should have a rigorous semantics. To meet this challenge, we present a formal operational semantics that can serve as a high-level specification for both the users and implementers. The key insight underlying the semantics is that various elaborate features of Ruby can be cleanly represented as a composition of two orthogonal calculi: one for objects and classes and the other for representing control. The presented semantics leads straightforwardly to a reference implementation. Initial evaluation of our implementation confirms that the presented semantics conforms to commonly accepted Ruby behavior.
For the entire collection see [Zbl 1327.68021].On the ubiquity of the Bayesian paradigm in statistical machine learning and data science.https://www.zbmath.org/1453.622692021-02-27T13:50:00+00:00"Fokoué, Ernest"https://www.zbmath.org/authors/?q=ai:fokoue.ernest-parfaitSummary: This paper seeks to provide a thorough account of the ubiquitous nature of the Bayesian paradigm in modern statistics, data science and artificial intelligence. Once maligned, on the one hand by those who philosophically hated the very idea of subjective probability used in prior specification, and on the other hand because of the intractability of the computations needed for Bayesian estimation and inference, the Bayesian school of thought now permeates and pervades virtually all areas of science, applied science, engineering, social science and even liberal arts, often in unsuspected ways. Thanks in part to the availability of powerful computing resources, but also to the literally unavoidable inherent presence of the quintessential building blocks of the Bayesian paradigm in all walks of life, the Bayesian way of handling statistical learning, estimation and inference is not only mainstream but also becoming the most central approach to learning from the data. 
This paper explores some of the most relevant elements to help the reader appreciate the pervading power and presence of the Bayesian paradigm in statistics, artificial intelligence and data science, with an emphasis on how the Gospel according to Reverend Thomas Bayes has turned out to be the truly good news, and in some cases the amazing saving grace, for all who seek to learn statistically from the data.An empirical demonstration of the no free lunch theorem.https://www.zbmath.org/1453.622732021-02-27T13:50:00+00:00"Ogundepo, Ezekiel Adebayo"https://www.zbmath.org/authors/?q=ai:ogundepo.ezekiel-adebayo"Fokoué, Ernest"https://www.zbmath.org/authors/?q=ai:fokoue.ernest-parfaitSummary: In this paper, we provide a substantial empirical demonstration of the statistical machine learning result known as the No Free Lunch Theorem (NFLT). We specifically compare the predictive performances of a wide variety of machine learning algorithms/methods on a wide variety of qualitatively and quantitatively different datasets. Our research work conclusively demonstrates strong evidence in favor of the NFLT by using an overall ranking of methods and their corresponding learning machines, revealing in effect that none of the learning machines considered predictively outperforms all the other machines on all the widely different datasets analyzed.
It is noteworthy, however, that while evidence from various datasets and methods supports the NFLT somewhat emphatically, some learning machines like Random Forest, Adaptive Boosting, and Support Vector Machines (SVM) appear to emerge as methods with the overall tendency to yield predictive performances almost always among the best.Stabbing circles for sets of segments in the plane.https://www.zbmath.org/1453.682022021-02-27T13:50:00+00:00"Claverol, Mercè"https://www.zbmath.org/authors/?q=ai:claverol.merce"Khramtcova, Elena"https://www.zbmath.org/authors/?q=ai:khramtcova.elena"Papadopoulou, Evanthia"https://www.zbmath.org/authors/?q=ai:papadopoulou.evanthia"Saumell, Maria"https://www.zbmath.org/authors/?q=ai:saumell.maria"Seara, Carlos"https://www.zbmath.org/authors/?q=ai:seara.carlosSummary: Stabbing a set \(S\) of \(n\) segments in the plane by a line is a well-known problem. In this paper we consider the variation where the stabbing object is a circle instead of a line. We show that the problem is tightly connected to cluster Voronoi diagrams, in particular, the Hausdorff and the farthest-color Voronoi diagram. Based on these diagrams, we provide a method to compute all the combinatorially different stabbing circles for \(S\), and the stabbing circles with maximum and minimum radius. We give conditions under which our method is fast. These conditions are satisfied if the segments in \(S\) are parallel, resulting in a \(O(n \log ^2{n})\) time algorithm. We also observe that the stabbing circle problem for \(S\) can be solved in optimal \(O(n^2)\) time and space by reducing the problem to computing the stabbing planes for a set of segments in 3D.
For the entire collection see [Zbl 1333.68012].Multi-stage fault warning for large electric grids using anomaly detection and machine learning.https://www.zbmath.org/1453.625502021-02-27T13:50:00+00:00"Raja, Sanjeev"https://www.zbmath.org/authors/?q=ai:raja.sanjeev"Fokoué, Ernest"https://www.zbmath.org/authors/?q=ai:fokoue.ernest-parfaitSummary: In the monitoring of a complex electric grid, it is of paramount importance to provide operators with early warnings of anomalies detected on the network, along with a precise classification and diagnosis of the specific fault type. In this paper, we propose a novel multi-stage early warning system prototype for electric grid fault detection, classification, subgroup discovery, and visualization. In the first stage, a computationally efficient anomaly detection method based on quartiles detects the presence of a fault in real time. In the second stage, the fault is classified into one of nine pre-defined disaster scenarios. The time series data are first mapped to highly discriminative features by applying dimensionality reduction based on temporal autocorrelation. The features are then mapped through one of three classification techniques: support vector machine, random forest, and artificial neural network. Finally in the third stage, intra-class clustering based on dynamic time warping is used to characterize the fault with further granularity. 
Results on the Bonneville Power Administration electric grid data show that i) the proposed anomaly detector is both fast and accurate; ii) dimensionality reduction leads to dramatic improvement in classification accuracy and speed; iii) the random forest method offers the most accurate, consistent, and robust fault classification; and iv) time series within a given class naturally separate into five distinct clusters which correspond closely to the geographical distribution of electric grid buses.Cathy Meadows: a central figure in protocol analysis.https://www.zbmath.org/1453.680072021-02-27T13:50:00+00:00"Pinsky, Sylvan"https://www.zbmath.org/authors/?q=ai:pinsky.sylvanSummary: This anecdotal note describes Cathy Meadows's leadership role in formal specification and verification of cryptographic protocols. Cathy has been a central figure in protocol analysis through her significant research and successful efforts to bring researchers together to form a unified, cohesive, and effective community to design and evaluate cryptographic protocols.
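The quartile-based detector in the first stage of the fault-warning system above can be sketched as a standard interquartile-range (IQR) outlier rule; the concrete quartile convention and the factor 1.5 are the textbook defaults, not necessarily the paper's exact choices:

```python
def iqr_anomalies(xs, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], taking the quartiles
    as medians of the lower and upper halves of the sorted data."""
    s = sorted(xs)
    half = len(s) // 2

    def median(a):
        m = len(a) // 2
        return a[m] if len(a) % 2 else (a[m - 1] + a[m]) / 2

    q1, q3 = median(s[:half]), median(s[-half:])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in xs if x < lo or x > hi]
```

On a window of measurements such as `[1, 2, 3, 4, 5, 6, 7, 8, 100]` only the spike `100` is flagged, which is the kind of cheap real-time test the first stage needs before the heavier classification stages run.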
For the entire collection see [Zbl 1412.68009].On the versatility and polyvalence of certain statistical learning machines.https://www.zbmath.org/1453.622682021-02-27T13:50:00+00:00"Fokoué, Ernest"https://www.zbmath.org/authors/?q=ai:fokoue.ernest-parfait(no abstract)Book review of: M. Danesi, Pythagoras' legacy. Mathematics in ten great ideas.https://www.zbmath.org/1453.000062021-02-27T13:50:00+00:00"Crilly, Tony"https://www.zbmath.org/authors/?q=ai:crilly.tonyReview of [Zbl 1430.00006].A blackbox polynomial system solver on parallel shared memory computers.https://www.zbmath.org/1453.654692021-02-27T13:50:00+00:00"Verschelde, Jan"https://www.zbmath.org/authors/?q=ai:verschelde.janSummary: A numerical irreducible decomposition for a polynomial system provides representations for the irreducible factors of all positive dimensional solution sets of the system, separated from its isolated solutions. Homotopy continuation methods are applied to compute a numerical irreducible decomposition. Load balancing and pipelining are techniques in a parallel implementation on a computer with multicore processors. The application of the parallel algorithms is illustrated on solving the cyclic \(n\)-roots problems, in particular for \(n=8,9\), and \(12\).
For the entire collection see [Zbl 1396.68014].A theory and an algorithm for computing sparse multivariate polynomial remainder sequence.https://www.zbmath.org/1453.130852021-02-27T13:50:00+00:00"Sasaki, Tateaki"https://www.zbmath.org/authors/?q=ai:sasaki.tateakiSummary: This paper presents an algorithm for computing the polynomial remainder sequence (PRS) and corresponding cofactor sequences of sparse multivariate polynomials over a number field \({\mathbb K}\). Most conventional algorithms for computing PRSs are based on the pseudo remainder (Prem), and the celebrated subresultant theory for the PRS has been constructed on the Prem. The Prem is uneconomical for computing PRSs of sparse polynomials. Hence, in this paper, the concept of sparse pseudo remainder (spsPrem) is defined. No subresultant-like theory has been developed so far for the PRS based on spsPrem. Therefore, we develop a matrix theory for spsPrem-based PRSs. The computational formula for PRS, regardless of whether it is based on Prem or spsPrem, causes considerable intermediate expression growth. Hence, we next propose a technique that largely suppresses the expression growth. The technique utilizes power-series arithmetic but no Hensel lifting. Simple experiments show that our technique suppresses the intermediate expression growth fairly well, if the sub-variable ordering is set suitably.
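For contrast with the sparse pseudo-remainder, the classical Prem-based PRS that the abstract calls uneconomical can be sketched for univariate integer polynomials (coefficient lists indexed by degree; this toy version illustrates the construction only, with none of the paper's multivariate or sparsity machinery):

```python
def trim(p):
    # Drop leading zero coefficients; p[i] is the coefficient of x**i.
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def prem(f, g):
    # Pseudo-remainder: repeatedly scale by lc(g) and cancel the leading
    # term, so all arithmetic stays over the integers.
    g = trim(g)
    r, dg, lc = trim(f[:]), len(g) - 1, g[-1]
    while r != [0] and len(r) - 1 >= dg:
        shift, lead = len(r) - 1 - dg, r[-1]
        r = [lc * c for c in r]
        for i, c in enumerate(g):
            r[i + shift] -= lead * c
        r = trim(r)
    return r

def prs(f, g):
    # Polynomial remainder sequence, stopping before the zero remainder;
    # its last element is proportional to gcd(f, g).
    seq = [trim(f[:]), trim(g[:])]
    while seq[-1] != [0]:
        seq.append(prem(seq[-2], seq[-1]))
    return seq[:-1]
```

For \(f = (x+1)(x^2+1)\) and \(g = (x-1)(x+1)\) the sequence ends in \(2x+2\), a constant multiple of \(\gcd(f,g) = x+1\); for coprime inputs it ends in a nonzero constant.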
For the entire collection see [Zbl 1396.68014].Beyond the first class of analytic complexity.https://www.zbmath.org/1453.320062021-02-27T13:50:00+00:00"Sadykov, T. M."https://www.zbmath.org/authors/?q=ai:sadykov.t-mSummary: We investigate the notion of analytic complexity of a bivariate holomorphic function by means of computer algebra tools. An estimate from below on the number of terms in the differential polynomials defining classes of analytic complexity is established. We provide an algorithm which allows one to explicitly compute the differential membership criteria for certain families of bivariate analytic functions in the second complexity class. The presented algorithm is implemented in the computer algebra system Singular 4-1-1.
For the entire collection see [Zbl 1396.68014].Factoring multivariate polynomials with many factors and huge coefficients.https://www.zbmath.org/1453.130722021-02-27T13:50:00+00:00"Monagan, Michael"https://www.zbmath.org/authors/?q=ai:monagan.michael-b"Tuncer, Baris"https://www.zbmath.org/authors/?q=ai:tuncer.barisSummary: The standard approach to factor a multivariate polynomial in \(\mathbb{Z}[x_1,x_2,\ldots,x_n]\) is to factor a univariate image in \(\mathbb{Z}[x_1]\) then recover the multivariate factors from their images using a process known as multivariate Hensel lifting. For the case when the factors are expected to be sparse, at CASC 2016, we introduced a new approach which uses sparse polynomial interpolation to solve the multivariate polynomial diophantine equations that arise inside Hensel lifting.
In this work we extend our previous approach to the case when the number of factors to be computed is more than two. Second, for the case where the integer coefficients of the factors are large, we develop an efficient \(p\)-adic method. We argue that the probabilistic sparse interpolation method we introduced provides new options to speed up the factorization in these two cases. Finally, we present some experimental data comparing our new methods with previous methods.
For the entire collection see [Zbl 1396.68014].Splitting permutation representations of finite groups by polynomial algebra methods.https://www.zbmath.org/1453.200222021-02-27T13:50:00+00:00"Kornyak, Vladimir V."https://www.zbmath.org/authors/?q=ai:kornyak.vladimir-vSummary: An algorithm for splitting permutation representations of a finite group over fields of characteristic zero into irreducible components is described. The algorithm is based on the fact that the components of the invariant inner product in invariant subspaces are operators of projection into these subspaces. An important part of the algorithm is the solution of systems of quadratic equations. A preliminary implementation of the algorithm splits representations up to dimensions of hundreds of thousands. Examples of computations are given in the appendix.
For the entire collection see [Zbl 1396.68014].A purely functional computer algebra system embedded in Haskell.https://www.zbmath.org/1453.680362021-02-27T13:50:00+00:00"Ishii, Hiromi"https://www.zbmath.org/authors/?q=ai:ishii.hiromiSummary: We demonstrate how methods in functional programming can be used to implement a computer algebra system. As a proof-of-concept, we present the computational-algebra package. It is a computer algebra system implemented as an embedded domain-specific language in Haskell, a purely functional programming language. Utilising methods in functional programming and prominent features of Haskell, this library achieves safety, composability, and correctness at the same time. To demonstrate the advantages of our approach, we have implemented advanced Gröbner basis algorithms, such as Faugère's \(F_4\) and \(F_5\), in a composable way.
For the entire collection see [Zbl 1396.68014].Qualitative analysis of a dynamical system with irrational first integrals.https://www.zbmath.org/1453.340802021-02-27T13:50:00+00:00"Irtegov, Valentin"https://www.zbmath.org/authors/?q=ai:irtegov.valentin-dmitrievich"Titorenko, Tatiana"https://www.zbmath.org/authors/?q=ai:titorenko.tatiana-nSummary: We conduct qualitative analysis for a completely integrable system of differential equations with irrational first integrals. These equations originate from gas dynamics and describe adiabatical motions of a compressible gas cloud with homogeneous deformation. We study the mechanical analog of this gas dynamical system - the rotational motion of a spheroidal rigid body around a fixed point in a potential force field described by an irrational function. Within our study, equilibria, pendulum oscillations and invariant manifolds, which these solutions belong to, have been found. The sufficient conditions of their stability in Lyapunov's sense have been derived and compared with the necessary ones. The analysis has been performed with the aid of computer algebra tools which proved to be essential. The computer algebra system ``Mathematica'' was employed.
For the entire collection see [Zbl 1396.68014].Positive solutions of systems of signed parametric polynomial inequalities.https://www.zbmath.org/1453.260122021-02-27T13:50:00+00:00"Hong, Hoon"https://www.zbmath.org/authors/?q=ai:hong.hoon"Sturm, Thomas"https://www.zbmath.org/authors/?q=ai:sturm.thomas-f|sturm.thomas-pSummary: We consider systems of strict multivariate polynomial inequalities over the reals. All polynomial coefficients are parameters ranging over the reals, where for each coefficient we prescribe its sign. We are interested in the existence of positive real solutions of our system for all choices of coefficients subject to our sign conditions. We give a decision procedure for the existence of such solutions. In the positive case our procedure yields a parametric positive solution as a rational function in the coefficients. Our framework allows to reformulate heuristic subtropical approaches for non-parametric systems of polynomial inequalities that have been recently used in qualitative biological network analysis and, independently, in satisfiability modulo theory solving. We apply our results to characterize the incompleteness of those methods.
For the entire collection see [Zbl 1396.68014].Finding multiple solutions in nonlinear integer programming with algebraic test-sets.https://www.zbmath.org/1453.901032021-02-27T13:50:00+00:00"Hartillo, M. I."https://www.zbmath.org/authors/?q=ai:hartillo.m-i"Jiménez-Cobano, J. M."https://www.zbmath.org/authors/?q=ai:jimenez-cobano.jose-manuel"Ucha, J. M."https://www.zbmath.org/authors/?q=ai:ucha-enriquez.jose-mariaSummary: We explain how to compute all the solutions of a nonlinear integer problem using the algebraic test-sets associated with a suitable linear subproblem. These test-sets are obtained using Gröbner bases. The main advantage of this method, compared to other available alternatives, is its exactness combined with quite good efficiency.
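The role of a test set can be illustrated with a toy greedy walk. The test set below is written down by hand for a two-variable example; in the setting of the abstract it would instead be computed from a Gröbner basis of the associated linear subproblem:

```python
def walk(x, cost, test_set):
    """Apply test-set moves while some move lowers the cost and keeps the
    point non-negative. For a genuine test set of the underlying lattice,
    this walk cannot get stuck at a non-optimal feasible point of the
    linear subproblem."""
    improved = True
    while improved:
        improved = False
        for t in test_set:
            y = [xi + ti for xi, ti in zip(x, t)]
            if min(y) >= 0 and cost(y) < cost(x):
                x, improved = y, True
                break
    return x
```

For instance, minimizing \(x_0 + 2x_1\) over the integer points of \(\{x_0 + x_1 = 4,\ x \geq 0\}\) with the test set \(\{\pm(1,-1)\}\), starting from the feasible point \((0,4)\), the walk descends to the optimum \((4,0)\).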
For the entire collection see [Zbl 1396.68014].Symbolic-numeric simulation of satellite dynamics with aerodynamic attitude control system.https://www.zbmath.org/1453.700122021-02-27T13:50:00+00:00"Gutnik, Sergey A."https://www.zbmath.org/authors/?q=ai:gutnik.sergey-a"Sarychev, Vasily A."https://www.zbmath.org/authors/?q=ai:sarychev.vasily-aSummary: The dynamics of the rotational motion of a satellite, subjected to the action of gravitational, aerodynamic and damping torques in a circular orbit is investigated. Our approach combines methods of symbolic study of the nonlinear algebraic system that determines equilibrium orientations of a satellite under the action of the external torques and numerical integration of the system of linear ordinary differential equations describing the dynamics of the satellite. An algorithm for the construction of a Gröbner basis was implemented for determining the equilibria of the satellite for specified values of the aerodynamic torque, damping coefficients, and principal central moments of inertia. Both the conditions of the satellite's equilibria existence and the conditions of asymptotic stability of these equilibria were obtained. The transition decay processes of the spatial oscillations of the satellite for various system parameters have also been studied.
For the entire collection see [Zbl 1396.68014].Symbolic-numerical algorithms for solving elliptic boundary-value problems using multivariate simplex Lagrange elements.https://www.zbmath.org/1453.654062021-02-27T13:50:00+00:00"Gusev, A. A."https://www.zbmath.org/authors/?q=ai:gusev.alexander-a"Gerdt, V. P."https://www.zbmath.org/authors/?q=ai:gerdt.vladimir-p"Chuluunbaatar, O."https://www.zbmath.org/authors/?q=ai:chuluunbaatar.ochbadrakh"Chuluunbaatar, G."https://www.zbmath.org/authors/?q=ai:chuluunbaatar.g"Vinitsky, S. I."https://www.zbmath.org/authors/?q=ai:vinitsky.sergue-i"Derbov, V. L."https://www.zbmath.org/authors/?q=ai:derbov.vladimir-l"Góźdź, A."https://www.zbmath.org/authors/?q=ai:gozdz.andrzej"Krassovitskiy, P. M."https://www.zbmath.org/authors/?q=ai:krassovitskiy.p-mSummary: We propose new symbolic-numerical algorithms implemented in Maple-Fortran environment for solving the self-adjoint elliptic boundary-value problem in a \(d\)-dimensional polyhedral finite domain, using the high-accuracy finite element method with multivariate Lagrange elements in the simplexes. The high-order fully symmetric PI-type Gaussian quadratures with positive weights and no points outside the simplex are calculated by means of the new symbolic-numerical algorithms implemented in Maple. Quadrature rules up to order 8 on the simplexes with dimension \(d=3\text{-}6\) are presented. We demonstrate the efficiency of algorithms and programs by benchmark calculations of a low part of spectra of exactly solvable Helmholtz problems for a cube and a hypercube.
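A classical low-order instance of such fully symmetric simplex rules is the degree-2 edge-midpoint rule on the unit triangle; the abstract's rules reach order 8 in dimensions 3 to 6, so this 2D rule is only a small illustration of the same family:

```python
# Degree-2 fully symmetric rule on the unit triangle {x, y >= 0, x + y <= 1}:
# the three edge midpoints with equal positive weights summing to the
# triangle's area 1/2. No point lies outside the simplex.
RULE = [((0.5, 0.0), 1 / 6), ((0.0, 0.5), 1 / 6), ((0.5, 0.5), 1 / 6)]

def integrate(f, rule=RULE):
    # Weighted sum of f over the quadrature nodes.
    return sum(w * f(*p) for p, w in rule)
```

The rule is exact for all polynomials of total degree at most 2, e.g. \(\iint_T x^2\,dx\,dy = 1/12\) and \(\iint_T xy\,dx\,dy = 1/24\).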
For the entire collection see [Zbl 1396.68014].A uniform cube-free morphism is \(k\)-power-free for all integers \(k\geq 4\).https://www.zbmath.org/1453.681472021-02-27T13:50:00+00:00"Wlazinski, Francis"https://www.zbmath.org/authors/?q=ai:wlazinski.francisSummary: In the study of \(k\)-power-free morphisms, the case of 3-power-free morphisms, i.e., cube-free morphisms, often differs from other \(k\)-power-free morphisms. Indeed, cube-freeness is less restrictive than square-freeness, and a cube provides fewer equations to solve than a \(k\)-power for any integer \(k\geq 4\). Nevertheless, the fact that the image of a word by a morphism contains a cube implies relations that, under some assumptions, allow us to establish our main result: a cube-free uniform morphism is a \(k\)-power-free morphism for all integers \(k\geq 4\).On a polytime factorization algorithm for multilinear polynomials over \(\mathbb{F}_2\).https://www.zbmath.org/1453.120012021-02-27T13:50:00+00:00"Emelyanov, Pavel"https://www.zbmath.org/authors/?q=ai:emelyanov.pavel-g"Ponomaryov, Denis"https://www.zbmath.org/authors/?q=ai:ponomaryov.denis-kSummary: In 2010, \textit{A. Shpilka} and \textit{I. Volkovich} [ICALP 2010, Lect. Notes Comput. Sci. 6198, 408--419 (2010; Zbl 1288.68252)] established a prominent result on the equivalence of polynomial factorization and identity testing. It follows from their result that a multilinear polynomial over the finite field of order 2 can be factored in time cubic in the size of the polynomial given as a string. Later, we rediscovered this result and provided a simple factorization algorithm based on computations over derivatives of multilinear polynomials. The algorithm has been applied to solve problems of compact representation of various combinatorial structures, including Boolean functions and relational data tables. In this paper, we describe an improvement of this factorization algorithm and report on preliminary experimental analysis.
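The notions of cubes and \(k\)-powers in words used in the morphism abstract above can be made concrete with a brute-force checker (purely illustrative; the paper's results concern morphisms preserving these properties, not this test):

```python
def has_k_power(w, k):
    """Return True iff the word w contains a non-empty factor of the form
    u**k, i.e. some block of period p repeated k times (a cube when k == 3)."""
    n = len(w)
    for i in range(n):                        # start of the candidate factor
        for p in range(1, (n - i) // k + 1):  # period of the repetition
            if all(w[i + j] == w[i + j % p] for j in range(k * p)):
                return True
    return False
```

For example, `"ababab"` contains the cube \((ab)^3\), whereas the square `"abab"` is cube-free; a morphism is \(k\)-power-free when it maps every \(k\)-power-free word to a \(k\)-power-free word.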
For the entire collection see [Zbl 1396.68014].Selection of auxiliary objectives using landscape features and offline learned classifier.https://www.zbmath.org/1453.682312021-02-27T13:50:00+00:00"Bassin, Anton"https://www.zbmath.org/authors/?q=ai:bassin.anton"Buzdalova, Arina"https://www.zbmath.org/authors/?q=ai:buzdalova.arinaSummary: In order to increase the performance of an evolutionary algorithm, additional auxiliary optimization objectives may be added. It is hard to predict which auxiliary objectives will be the most efficient at different stages of optimization. Thus, the problem of dynamic selection between auxiliary objectives appears. This paper proposes a new method for efficient selection of auxiliary objectives, which uses fitness landscape information and problem meta-features. An offline learned meta-classifier is used to dynamically predict the most efficient auxiliary objective during the main optimization run performed by an evolutionary algorithm. An empirical evaluation on two benchmark combinatorial optimization problems (Traveling Salesman and Job Shop Scheduling problems) shows that the proposed approach outperforms similar known methods of auxiliary objective selection.
For the entire collection see [Zbl 1360.68014].Symbolic algorithm for generating the orthonormal Bargmann-Moshinsky basis for \(\mathrm{SU}(3)\) group.https://www.zbmath.org/1453.650812021-02-27T13:50:00+00:00"Deveikis, A."https://www.zbmath.org/authors/?q=ai:deveikis.algirdas"Gusev, A. A."https://www.zbmath.org/authors/?q=ai:gusev.alexander-a"Gerdt, V. P."https://www.zbmath.org/authors/?q=ai:gerdt.vladimir-p"Vinitsky, S. I."https://www.zbmath.org/authors/?q=ai:vinitsky.sergue-i"Góźdź, A."https://www.zbmath.org/authors/?q=ai:gozdz.andrzej"Pȩdrak, A."https://www.zbmath.org/authors/?q=ai:pedrak.aSummary: A symbolic algorithm which can be implemented in any computer algebra system for generating the Bargmann-Moshinsky (BM) basis with the highest weight vectors of \(\mathrm{SU}(3)\) irreducible representations is presented. An effective method resulting in an analytical formula for the overlap integrals in the case of the non-canonical BM basis [\textit{S. Alisauskas}, \textit{P. Raychev} and \textit{R. Roussev}, ``Analytical form of the orthonormal basis of the decomposition \(\mathrm{SU}(3)\supset \mathrm{O}(3)\supset \mathrm{O}(2)\) for some \((\lambda,\mu)\) multiplets'', J. Phys. G, Nucl. Phys. 7, 1213--1226 (1981; \url{doi:10.1088/0305-4616/7/9/013})] is used. A symbolic recursive algorithm for orthonormalization of the obtained basis is developed. The effectiveness of the algorithms implemented in Mathematica 10.1 is investigated by calculation of the overlap integrals for up to \(\mu=5\) with \(\lambda>\mu\) and orthonormalization of the basis for up to \(\mu=4\) with \(\lambda>\mu\). The action of the zero component of the quadrupole operator onto the basis vectors with \(\mu=4\) is also obtained.
For the entire collection see [Zbl 1396.68014].From exponential analysis to Padé approximation and tensor decomposition, in one and more dimensions.https://www.zbmath.org/1453.410042021-02-27T13:50:00+00:00"Cuyt, Annie"https://www.zbmath.org/authors/?q=ai:cuyt.annie"Knaepkens, Ferre"https://www.zbmath.org/authors/?q=ai:knaepkens.ferre"Lee, Wen-shin"https://www.zbmath.org/authors/?q=ai:lee.wen-shinSummary: Exponential analysis in signal processing is essentially what is known as sparse interpolation in computer algebra. We show how exponential analysis from regularly spaced samples is reformulated as Padé approximation from approximation theory and tensor decomposition from multilinear algebra.{
}The univariate situation is briefly recalled and discussed in Sect. 1. The new connections from approximation theory and tensor decomposition to the multivariate generalization are the subject of Sect. 2. These connections immediately allow for some generalization of the sampling scheme, not covered by the current multivariate theory.{
}An interesting computational illustration of the above in blind source separation is presented in Sect. 3.
For the entire collection see [Zbl 1396.68014].A continuation method for visualizing planar real algebraic curves with singularities.https://www.zbmath.org/1453.650402021-02-27T13:50:00+00:00"Chen, Changbo"https://www.zbmath.org/authors/?q=ai:chen.changbo"Wu, Wenyuan"https://www.zbmath.org/authors/?q=ai:wu.wenyuanSummary: We present a new method for visualizing planar real algebraic curves inside a bounding box based on numerical continuation and critical point methods. Since the topology of the curve near a singular point is not numerically stable, we trace the curve only outside neighborhoods of singular points and replace each neighborhood simply by a point, which produces a polygonal approximation that is \(\epsilon\)-close to the curve. Such an approximation is more stable for defining the numerical connectedness of the complement of the curve, which is important for applications such as solving bi-parametric polynomial systems.{
}The algorithm starts by computing three types of key points of the curve, namely the intersections of the curve with small circles centered at singular points, regular critical points of every connected component of the curve, and intersection points of the curve with the given bounding box. It then traces the curve starting from these key points, in the order listed above. This basic scheme is further enhanced by several optimizations, such as grouping singular points in natural clusters and tracing the curve by a try-and-resume strategy. The effectiveness of the algorithm is illustrated by numerous examples.
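Away from singular points, the core step of such continuation methods is a standard tangent predictor followed by a Newton corrector. A minimal sketch (our illustration under simplifying assumptions; the function names and the circle example are not from the paper):

```python
import math

def trace_curve(f, grad, p0, step=0.05, n_steps=100):
    """Trace a regular arc of the implicit curve f(x, y) = 0 from a
    point p0 on the curve: predict along the unit tangent, then
    correct back onto the curve by Newton steps along the gradient."""
    points = [p0]
    x, y = p0
    for _ in range(n_steps):
        gx, gy = grad(x, y)
        norm = math.hypot(gx, gy)
        tx, ty = -gy / norm, gx / norm            # unit tangent (predictor)
        x, y = x + step * tx, y + step * ty
        for _ in range(10):                       # Newton corrector
            gx, gy = grad(x, y)
            d = f(x, y) / (gx * gx + gy * gy)
            x, y = x - d * gx, y - d * gy
        points.append((x, y))
    return points

# Unit circle x^2 + y^2 - 1 = 0, traced from (1, 0).
pts = trace_curve(lambda x, y: x * x + y * y - 1,
                  lambda x, y: (2 * x, 2 * y), (1.0, 0.0))
assert all(abs(x * x + y * y - 1) < 1e-9 for x, y in pts)
```

The paper's contribution is precisely what this sketch omits: near a singular point the gradient vanishes and the corrector is unreliable, which is why the algorithm excises neighborhoods of singular points and traces only between the precomputed key points.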
For the entire collection see [Zbl 1396.68014].Symbolic-numeric methods for nonlinear integro-differential modeling.https://www.zbmath.org/1453.651632021-02-27T13:50:00+00:00"Boulier, François"https://www.zbmath.org/authors/?q=ai:boulier.francois"Castel, Hélène"https://www.zbmath.org/authors/?q=ai:castel.helene"Corson, Nathalie"https://www.zbmath.org/authors/?q=ai:corson.nathalie"Lanza, Valentina"https://www.zbmath.org/authors/?q=ai:lanza.valentina"Lemaire, François"https://www.zbmath.org/authors/?q=ai:lemaire.francois"Poteaux, Adrien"https://www.zbmath.org/authors/?q=ai:poteaux.adrien"Quadrat, Alban"https://www.zbmath.org/authors/?q=ai:quadrat.alban"Verdière, Nathalie"https://www.zbmath.org/authors/?q=ai:verdiere.nathalieSummary: This paper presents a proof of concept for symbolic and numeric methods dedicated to the parameter estimation problem for models formulated by means of nonlinear integro-differential equations (IDE). In particular, we address: the computation of the model input-output equation and the numerical integration of IDE systems.
For the entire collection see [Zbl 1396.68014].A strongly consistent finite difference scheme for steady Stokes flow and its modified equations.https://www.zbmath.org/1453.761222021-02-27T13:50:00+00:00"Blinkov, Yury A."https://www.zbmath.org/authors/?q=ai:blinkov.yury-a"Gerdt, Vladimir P."https://www.zbmath.org/authors/?q=ai:gerdt.vladimir-p"Lyakhov, Dmitry A."https://www.zbmath.org/authors/?q=ai:lyakhov.dmitry-a"Michels, Dominik L."https://www.zbmath.org/authors/?q=ai:michels.dominik-ludewigSummary: We construct and analyze a strongly consistent second-order finite difference scheme for the steady two-dimensional Stokes flow. The pressure Poisson equation is explicitly incorporated into the scheme. Our approach, suggested by the first two authors, is based on a combination of the finite volume method, difference elimination, and numerical integration. We make use of the techniques of differential and difference Janet/Gröbner bases. In order to prove strong consistency of the generated scheme, we correlate the differential ideal generated by the polynomials in the Stokes equations with the difference ideal generated by the polynomials in the constructed difference scheme. Additionally, we compute the modified differential system of the obtained scheme and analyze the scheme's accuracy and strong consistency by considering this system. An evaluation of our scheme against the established marker-and-cell method is carried out.
For the entire collection see [Zbl 1396.68014].Computation of Pommaret bases using syzygies.https://www.zbmath.org/1453.130752021-02-27T13:50:00+00:00"Binaei, Bentolhoda"https://www.zbmath.org/authors/?q=ai:binaei.bentolhoda"Hashemi, Amir"https://www.zbmath.org/authors/?q=ai:hashemi.amir"Seiler, Werner M."https://www.zbmath.org/authors/?q=ai:seiler.werner-mSummary: We investigate the application of syzygies for efficiently computing (finite) Pommaret bases. For this purpose, we first describe a non-trivial variant of Gerdt's algorithm [\textit{V. P. Gerdt}, in: Computational commutative and non-commutative algebraic geometry. Proceedings of the NATO Advanced Research Workshop, Chisinau, Republic of Moldova, 2004. Amsterdam: IOS Press, 199--225 (2005; Zbl 1104.13012)] to construct an involutive basis for the input ideal as well as an involutive basis for the syzygy module of the output basis. Then we apply this new algorithm in the context of Seiler's method to transform a given ideal into quasi stable position to ensure the existence of a finite Pommaret basis [\textit{W. M. Seiler},
Appl. Algebra Eng. Commun. Comput. 20, No. 3--4, 261--338 (2009; Zbl 1175.13011)]. This new approach allows us to avoid superfluous reductions in the iterative computation of Janet bases required by this method. We conclude the paper by proposing an involutive variant of the signature based algorithm of \textit{S. Gao} et al. [Math. Comput. 85, No. 297, 449--465 (2016; Zbl 1331.13018)] to compute simultaneously a Gröbner basis for a given ideal and for the syzygy module of the input basis. All the presented algorithms have been implemented in Maple and their performance is evaluated via a set of benchmark ideals.
For the entire collection see [Zbl 1396.68014].Sparse polynomial arithmetic with the BPAS library.https://www.zbmath.org/1453.654702021-02-27T13:50:00+00:00"Asadi, Mohammadali"https://www.zbmath.org/authors/?q=ai:asadi.mohammadali"Brandt, Alexander"https://www.zbmath.org/authors/?q=ai:brandt.alexander"Moir, Robert H. C."https://www.zbmath.org/authors/?q=ai:moir.robert-h-c"Moreno Maza, Marc"https://www.zbmath.org/authors/?q=ai:moreno-maza.marcSummary: We discuss algorithms for pseudo-division and division with remainder of multivariate polynomials with sparse representation. This work is motivated by the computations of normal forms and pseudo-remainders with respect to regular chains. We report on the implementation of those algorithms with the BPAS library.
For the entire collection see [Zbl 1396.68014].Proof-of-work certificates that can be efficiently computed in the cloud (invited talk).https://www.zbmath.org/1453.680862021-02-27T13:50:00+00:00"Dumas, Jean-Guillaume"https://www.zbmath.org/authors/?q=ai:dumas.jean-guillaumeSummary: In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a cloud-based service. There, demanding computations are outsourced in order to limit infrastructure costs.{
}The idea of verifiable computing is to associate a data structure, a proof-of-work certificate, to the result of the outsourced computation. This allows a verification algorithm to prove the validity of the result, faster than by recomputing it. We talk about a prover (the server performing the computations) and a verifier.{
}Goldwasser, Kalai and Rothblum gave in 2008 a generic method to verify any parallelizable computation, in almost linear time in the size of the (potentially structured) inputs and the result. However, the extra cost of the computations for the prover (and therefore the extra cost to the customer), although only a near-constant factor of the overall work, is nonetheless prohibitive in practice.{
}In contrast, we here present problem-specific procedures in computer algebra, e.g. for exact linear algebra computations, that are prover-optimal, that is, that have much lower financial overhead.
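A classical example of a prover-optimal certificate for exact linear algebra is Freivalds' check for matrix multiplication: the verifier tests \(A(Br) = Cr\) for random 0/1 vectors \(r\) in quadratic time per trial, instead of recomputing the product. The sketch below is our illustration of that well-known check, not the specific certificates of the talk:

```python
import random

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def freivalds_check(A, B, C, trials=20, rng=random):
    """Probabilistically verify that A @ B == C over the integers.
    A correct C always passes; a wrong C passes all trials with
    probability at most 2**-trials."""
    n = len(C[0])
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # correct product
bad = [[19, 22], [43, 51]]        # one corrupted entry
rng = random.Random(0)            # fixed seed for reproducibility
assert freivalds_check(A, B, C, rng=rng)
assert not freivalds_check(A, B, bad, rng=rng)
```

Here the prover's extra work is zero (it simply ships the result), which is the sense in which such problem-specific certificates are prover-optimal, unlike the generic protocol above.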
For the entire collection see [Zbl 1396.68014].The axiomatic approach and the Internet.https://www.zbmath.org/1453.910502021-02-27T13:50:00+00:00"Tennenholtz, Moshe"https://www.zbmath.org/authors/?q=ai:tennenholtz.moshe"Zohar, Aviv"https://www.zbmath.org/authors/?q=ai:zohar.avivSummary: The Internet reinforces types of multiagent systems which can effectively utilize this new medium. Reputation systems, ranking systems, trust systems, recommendation systems, affiliate marketing in social networks, and more, are flowering in its midst. This recent wave of online social systems is typically associated with a large amount of data that is collected online, which leads to the ``big data'' approach to the utilization of such information. Quite surprisingly, however, the abundance of available data does not help system designers to come up with the right design for online systems in the first place. Indeed, available data is typically generated by the use of a particular system, and mining the data generated by users while interacting with one system does not provide a tool for exploring the overwhelmingly large design space. Interestingly, the main practical approach to software and hardware design, the formal specification of clear system requirements and the implementation of a system satisfying these exact requirements, has not been used often. This classical approach, when adapted to the context of multiagent systems, coincides with extensions of a standard tool of social choice theory and cooperative game theory, namely, the axiomatic approach.
The use of axioms can be either in a descriptive or in a normative context. In both cases basic properties of a system are phrased as axioms. In the descriptive approach a basic set of properties of a known system is sought out, with the aim of finding one that characterizes the system uniquely, hence teaching us much about its essence. The normative approach, on the other hand, starts from a specification of a basic set of requirements which is followed by an attempt to construct a system that satisfies them. Our goal in this chapter is to demonstrate both approaches in the context of the Internet. Perhaps the best known axiomatic theory in the social sciences is the theory of social choice. In the classical social choice setting we have a set of voters and a set of alternatives, where each voter has a ranking over the set of alternatives, and our aim is to find a good aggregation of the individual rankings into a global (or social) ranking. Various properties of such aggregation functions have been considered and have led to different characterizations of particular systems as well as impossibility results showing no system can satisfy certain sets of properties all at once.
For the entire collection see [Zbl 1436.91001].Data-driven polynomial chaos expansion for machine learning regression.https://www.zbmath.org/1453.625652021-02-27T13:50:00+00:00"Torre, Emiliano"https://www.zbmath.org/authors/?q=ai:torre.emiliano"Marelli, Stefano"https://www.zbmath.org/authors/?q=ai:marelli.stefano"Embrechts, Paul"https://www.zbmath.org/authors/?q=ai:embrechts.paul"Sudret, Bruno"https://www.zbmath.org/authors/?q=ai:sudret.brunoSummary: We present a regression technique for data-driven problems based on polynomial chaos expansion (PCE). PCE is a popular technique in the field of uncertainty quantification (UQ), where it is typically used to replace a runnable but expensive computational model subject to random inputs with an inexpensive-to-evaluate polynomial function. The metamodel obtained enables a reliable estimation of the statistics of the output, provided that a suitable probabilistic model of the input is available. Machine learning (ML) regression is a research field that focuses on providing purely data-driven input-output maps, with the focus on pointwise prediction accuracy. We show that a PCE metamodel purely trained on data can yield pointwise predictions whose accuracy is comparable to that of other ML regression models, such as neural networks and support vector machines. The comparisons are performed on benchmark datasets available from the literature. The methodology also enables the quantification of the output uncertainties, and is robust to noise. 
Furthermore, it enjoys additional desirable properties, such as good performance for small training sets and simplicity of construction, with only little parameter tuning required.A refined speed limit for the imaginary-time Schrödinger equation.https://www.zbmath.org/1453.810102021-02-27T13:50:00+00:00"Sun, Jie"https://www.zbmath.org/authors/?q=ai:sun.jie.2"Lu, Songfeng"https://www.zbmath.org/authors/?q=ai:lu.songfengIncomplete information and communication in voting.https://www.zbmath.org/1453.910492021-02-27T13:50:00+00:00"Boutilier, Craig"https://www.zbmath.org/authors/?q=ai:boutilier.craig"Rosenschein, Jeffrey S."https://www.zbmath.org/authors/?q=ai:rosenschein.jeffrey-sSummary: Many voting schemes (and other social choice mechanisms) make stringent assumptions about the preference information provided by voters, as well as other aspects of the choice situation. Realizing these assumptions in practice often imposes an undesirable and unnecessary burden on both voters and mechanism designers with respect to the provision of such information. This chapter provides an overview of a variety of topics related to the information and communication requirements of voting. One theme underlying much of the work discussed in this chapter is its focus on determining winners or making decisions with incomplete or stochastic information about voter preferences -- or in some cases, about the alternatives themselves. This includes work on the computational complexity of determining possible/necessary winners and regret-based winner determination; the query or communication complexity of eliciting preferences; practical schemes for preference elicitation; winner determination under alternative uncertainty; the sample complexity of learning voting rules; and compilation complexity.
For the entire collection see [Zbl 1436.91001].Software reliability models incorporating testing effort.https://www.zbmath.org/1453.900542021-02-27T13:50:00+00:00"Fiondella, Lance"https://www.zbmath.org/authors/?q=ai:fiondella.lance"Gokhale, Swapna S."https://www.zbmath.org/authors/?q=ai:gokhale.swapna-sSummary: Explicitly relating the effectiveness of fault detection to the effort expended in testing, achieved by incorporating testing effort into software reliability models, has been the focus of many research efforts. Although the literature is replete with these ``testing effort models,'' their development appears to be ad hoc and disconnected. The objective of this survey is to propose a framework to classify testing effort models, aimed at identifying their commonalities and highlighting their differences. We conclude the article with a brief discussion of the limitations of the prevalent works in this domain, which also identifies directions for future research.Data-driven software reliability and availability modeling and prediction.https://www.zbmath.org/1453.900642021-02-27T13:50:00+00:00"Zhang, Xuemei"https://www.zbmath.org/authors/?q=ai:zhang.xuemei"Pham, Hoang"https://www.zbmath.org/authors/?q=ai:pham.hoangSummary: Traditional software deployment readiness criteria, such as ``zero severity one defects'', do not provide any indication of how reliable the product will be in the field. In this paper, we propose a software reliability prediction framework to achieve data-driven, customer-focused reliability and availability assessment throughout the entire development life cycle. Focusing on front-end reliability and availability improvement, the framework starts with availability evaluations as early as the architecture design phases. Markov-based architecture reliability models are used to study the failure and failure recovery mechanisms of the systems and solutions.
These early evaluations can help architecture design, reliability requirement setting, and reliability budget allocation. The early-phase models and predictions can be updated as testing data becomes available. Software reliability growth models (SRGMs) are used to estimate one of the most influential parameters, i.e., the failure rates of software. Estimation of other reliability parameters, such as the coverage factor, silent-failure detection times, recovery durations, and success probabilities, is also discussed in this paper. This framework also calibrates test data with field observations, and thus forms a closed-loop approach to evaluating the reliability and availability of the software product to verify that the product meets specific reliability expectations.A unified modeling framework incorporating change-point for measuring reliability growth during software testing.https://www.zbmath.org/1453.900562021-02-27T13:50:00+00:00"Kapur, P. K."https://www.zbmath.org/authors/?q=ai:kapur.p-k"Kumar, Jyotish"https://www.zbmath.org/authors/?q=ai:kumar.jyotish"Kumar, Ravi"https://www.zbmath.org/authors/?q=ai:kumar.ravi-shankar|kumar.ravi-kSummary: Reliability of software often depends considerably on the quality of software testing. By assessing reliability we can also judge the quality of testing. Alternatively, reliability estimation can be used to decide whether enough testing has been done. Hence, besides characterizing an important quality property of the product being delivered, reliability estimation has a direct role in project management: the reliability models are used by the project manager to decide when to stop testing \textit{Jalote, Pankaj} [An integrated approach to software engineering. New Delhi: Narosa Publication (2005; \url{doi:10.1007/0-387-28132-0})]. A plethora of software reliability growth models (SRGMs) have been developed during the last three decades.
Various software development environments and assumptions have been incorporated during the development of these models. From our studies, many existing SRGMs can be unified under a more general formulation. Indeed, model unification permits the study of general models without making many additional assumptions. In the literature, various software reliability models incorporating the change-point concept have been proposed; to the best of our knowledge, these models have been developed separately. In this paper we propose a general framework for deriving several software reliability growth models with the change-point concept based on a non-homogeneous Poisson process (NHPP). Some existing change-point models, along with three new models, have been derived from the proposed general framework. The derived models have been validated and verified using real data sets. Estimated parameters and comparison-criteria results are also presented.An empirical study of stochastic differential equation models based on component importance level for open source software.https://www.zbmath.org/1453.900632021-02-27T13:50:00+00:00"Yamada, Shigeru"https://www.zbmath.org/authors/?q=ai:yamada.shigeru"Tamura, Yoshinobu"https://www.zbmath.org/authors/?q=ai:tamura.yoshinobuSummary: Network technologies are becoming increasingly complex in a wide range of areas. In particular, open source software systems, which serve as key components of critical infrastructures in society, are still expanding. In this paper, we propose a new approach to software reliability assessment by creating a fusion of neural networks and stochastic differential equations based on component importance levels. We also analyze actual software fault-count data to show numerical examples of software reliability assessment considering component importance levels for open source software.
Moreover, we compare the goodness-of-fit of the proposed models with the conventional software reliability growth model for OSS.Hereditary history-preserving bisimilarity: logics and automata.https://www.zbmath.org/1453.681102021-02-27T13:50:00+00:00"Baldan, Paolo"https://www.zbmath.org/authors/?q=ai:baldan.paolo"Crafa, Silvia"https://www.zbmath.org/authors/?q=ai:crafa.silviaSummary: We study hereditary history-preserving (hhp-) bisimilarity, a canonical behavioural equivalence in the true concurrent spectrum, by means of logics and automata. We first show that hhp-bisimilarity on prime event structures can be characterised in terms of a simple logic whose formulae just observe events in computations and check their executability. The logic suggests a characterisation of hhp-bisimilarity based on history-dependent automata, a formalism for modelling systems with dynamic allocation and deallocation of resources, where the history of resources is traced over time. Prime event structures can be naturally mapped into history-dependent automata in a way that hhp-bisimilarity exactly corresponds to the canonical behavioural equivalence for history-dependent automata.
For the entire collection see [Zbl 1327.68021].Call-by-value in a basic logic for interaction.https://www.zbmath.org/1453.680292021-02-27T13:50:00+00:00"Schöpp, Ulrich"https://www.zbmath.org/authors/?q=ai:schopp.ulrichSummary: In game semantics and related approaches to programming language semantics, programs are modelled by interaction dialogues. Such models have recently been used by a number of authors for the design of compilation methods, in particular for applications where resource control is important. The work in this area has focused on call-by-name languages. In this paper we study the compilation of call-by-value into a first-order low-level language by means of an interpretation in a semantic interactive model. We refine the methods developed for call-by-name languages to allow an efficient treatment of call-by-value. We introduce an intermediate language that is based on the structure of an interactive computation model and that can be seen as a fragment of linear logic. The main result is that Plotkin's call-by-value \textsc{cps}-translation and its soundness proof can be refined to target this intermediate language. This refined \textsc{cps}-translation amounts to a direct compilation of the source language into a first-order language.
For the entire collection see [Zbl 1327.68021].Syntax-directed divide-and-conquer data-flow analysis.https://www.zbmath.org/1453.680552021-02-27T13:50:00+00:00"Sato, Shigeyuki"https://www.zbmath.org/authors/?q=ai:sato.shigeyuki"Morihata, Akimasa"https://www.zbmath.org/authors/?q=ai:morihata.akimasaSummary: Link-time optimization, with which GCC and LLVM are equipped, generally deals with large-scale procedures because of aggressive procedure inlining. Data-flow analysis (DFA), an essential computation for compiler optimization, is therefore expected to handle large-scale procedures. One promising approach to the DFA of large-scale procedures is divide-and-conquer parallelization. However, DFA on control-flow graphs is difficult to divide and conquer. If we perform DFA on abstract syntax trees (ASTs) in a syntax-directed manner, the divide and conquer of DFA becomes straightforward, owing to the recursive structure of ASTs, but then nonstructural control flow such as goto/label becomes a problem. To resolve this, we have developed a novel syntax-directed method of DFA on ASTs that can deal with goto/label and is amenable to divide-and-conquer parallelization. We tested the feasibility of our method experimentally through prototype implementations and observed that our prototype achieved a significant speedup.
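The compositional flavor of syntax-directed DFA can be seen in a toy "definitely assigned variables" analysis: each AST node's summary is computed only from its children's summaries, so independent subtrees can be analyzed in parallel. A minimal sketch (ours; it deliberately omits the paper's central contribution, the handling of goto/label):

```python
from dataclasses import dataclass

# Tiny AST: assignments, sequencing, and conditionals.
@dataclass
class Assign:
    var: str

@dataclass
class Seq:
    first: object
    second: object

@dataclass
class If:
    then: object
    orelse: object

def assigned(node):
    """Set of variables definitely assigned after executing `node`,
    computed bottom-up over the AST; sibling subtrees are independent,
    which makes divide-and-conquer parallelization straightforward."""
    if isinstance(node, Assign):
        return {node.var}
    if isinstance(node, Seq):
        return assigned(node.first) | assigned(node.second)
    if isinstance(node, If):
        # A variable is definite only if assigned on both branches.
        return assigned(node.then) & assigned(node.orelse)
    raise TypeError(node)

prog = Seq(Assign('x'), If(Seq(Assign('y'), Assign('z')), Assign('y')))
assert assigned(prog) == {'x', 'y'}
```

On a control-flow graph the same analysis needs a fixpoint iteration over the whole graph; on the AST it is a single recursive pass, which is the structural advantage the abstract exploits.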
For the entire collection see [Zbl 1327.68021].Inferring grammatical summaries of string values.https://www.zbmath.org/1453.680492021-02-27T13:50:00+00:00"Kim, Se-Won"https://www.zbmath.org/authors/?q=ai:kim.se-won"Chin, Wooyoung"https://www.zbmath.org/authors/?q=ai:chin.wooyoung"Park, Jimin"https://www.zbmath.org/authors/?q=ai:park.jimin"Kim, Jeongmin"https://www.zbmath.org/authors/?q=ai:kim.jeongmin"Ryu, Sukyoung"https://www.zbmath.org/authors/?q=ai:ryu.sukyoungSummary: We present a new kind of static analysis that infers grammatical summaries of string values. We are given a context-free grammar and a program which contains string expressions whose values should be partial sentences of the grammar. A grammatical summary of an expression is a vocabulary string of the grammar that derives all the possible string values of the expression. Our analysis automatically finds such grammatical summaries. We design the analysis within the abstract interpretation framework, making it pluggable into conventional data-flow analysis frameworks.{
}In addition to the theoretical foundation of the analysis, we present how we make the analysis computable and tractable. While inferring grammatical summaries of a string expression often results in an infinite number of summaries, we make the inference computable by using a CFL-reachability algorithm and finite state automata representation. Additionally, we make the analysis more tractable by several optimization techniques such as keeping only relevant summaries and using two-level grammars. These techniques achieve huge speedup in our experiments.
For the entire collection see [Zbl 1327.68021].A ZDD-based efficient higher-order model checking algorithm.https://www.zbmath.org/1453.681032021-02-27T13:50:00+00:00"Terao, Taku"https://www.zbmath.org/authors/?q=ai:terao.taku"Kobayashi, Naoki"https://www.zbmath.org/authors/?q=ai:kobayashi.naokiSummary: The model checking of higher-order recursion schemes, aka. higher-order model checking, has recently been applied to automated verification of higher-order programs. Despite its extremely high worst-case complexity, practical algorithms have been developed that work well for typical inputs that arise in program verification. Even the state-of-the-art algorithms are, however, not scalable enough for verification of thousands or millions of lines of programs. We, therefore, propose a new higher-order model checking algorithm. It is based on Broadbent and Kobayashi's type and saturation-based algorithm \textsc{HorSat}, but we make two significant modifications. First, unlike \textsc{HorSat}, we collect flow information (which is necessary for optimization) in linear time by using a sub-transitive flow graph. Thanks to this, the resulting algorithm runs in almost linear time under a fixed-parameter assumption. Secondly, we employ zero-suppressed binary decision diagrams to efficiently represent and propagate type information. We have confirmed through experiments that the new algorithm is more scalable for several families of inputs than the state-of-the-art higher-order model checkers \textsc{HorSat} and Preface.
For the entire collection see [Zbl 1327.68021].Automatic constrained rewriting induction towards verifying procedural programs.https://www.zbmath.org/1453.680502021-02-27T13:50:00+00:00"Kop, Cynthia"https://www.zbmath.org/authors/?q=ai:kop.cynthia"Nishida, Naoki"https://www.zbmath.org/authors/?q=ai:nishida.naokiSummary: This paper aims at developing a verification method for procedural programs via a transformation into logically constrained term rewriting systems (LCTRSs). To this end, we adapt existing rewriting induction methods to LCTRSs and propose a simple yet effective method to generalize equations. We show that we can handle realistic functions, involving, e.g., integers and arrays. An implementation is provided.
For the entire collection see [Zbl 1327.68021].Compositional entailment checking for a fragment of separation logic.https://www.zbmath.org/1453.680462021-02-27T13:50:00+00:00"Enea, Constantin"https://www.zbmath.org/authors/?q=ai:enea.constantin"Lengál, Ondřej"https://www.zbmath.org/authors/?q=ai:lengal.ondrej"Sighireanu, Mihaela"https://www.zbmath.org/authors/?q=ai:sighireanu.mihaela"Vojnar, Tomáš"https://www.zbmath.org/authors/?q=ai:vojnar.tomasSummary: We present a (semi-)decision procedure for checking entailment between separation logic formulas with inductive predicates specifying complex data structures corresponding to finite nesting of various kinds of linked lists: acyclic or cyclic, singly or doubly linked, skip lists, etc. The decision procedure is compositional in the sense that it reduces the problem of checking entailment between two arbitrary formulas to the problem of checking entailment between a formula and an atom. Subsequently, in case the atom is a predicate, we reduce the entailment to testing membership of a tree derived from the formula in the language of a tree automaton derived from the predicate. We implemented this decision procedure and tested it successfully on verification conditions obtained from programs using singly and doubly linked nested lists as well as skip lists.
For the entire collection see [Zbl 1327.68021].Simple and efficient algorithms for octagons.https://www.zbmath.org/1453.680442021-02-27T13:50:00+00:00"Chawdhary, Aziem"https://www.zbmath.org/authors/?q=ai:chawdhary.aziem"Robbins, Ed"https://www.zbmath.org/authors/?q=ai:robbins.edward-henry|robbins.edward-l"King, Andy"https://www.zbmath.org/authors/?q=ai:king.andySummary: The numerical domain of octagons can be viewed as an exercise in simplicity: it trades expressiveness for efficiency and ease of implementation. The domain can represent unary and binary constraints where the coefficients are \(+1\) or \(-1\), so-called octagonal constraints, and comes with operations that have cubic complexity. The central operation is closure, which computes a canonical form by deriving all implied octagonal constraints from a given octagonal system. This paper investigates the role of incrementality, namely closing a system where only one constraint has been changed, which is a dominating use-case. We present two new incremental algorithms for closure, both of which are conceptually simple and computationally efficient, and argue their correctness.
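The incremental idea can be sketched in a few lines. The sketch below is an illustration, not the authors' algorithm: it works on the plain difference-bound matrix (DBM) representation that octagons refine (octagons run the same propagation on a matrix of doubled size). After tightening a single constraint, only shortest paths routed through the changed edge can improve, so \(O(n^2)\) updates suffice instead of a full \(O(n^3)\) closure:

```python
def incremental_close(m, a, b, w):
    """Restore closure of a closed difference-bound matrix m
    (m[i][j] bounds x_j - x_i) after tightening the single
    constraint x_b - x_a <= w.  Assumes the tightened system
    stays consistent (no negative cycle).  O(n^2) work, versus
    O(n^3) for re-running the full Floyd-Warshall closure."""
    n = len(m)
    if w >= m[a][b]:
        return m  # nothing tightened; closure is unchanged
    m[a][b] = w
    for i in range(n):
        for j in range(n):
            # only paths routed through the new edge a -> b can improve
            via = m[i][a] + w + m[b][j]
            if via < m[i][j]:
                m[i][j] = via
    return m
```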
For the entire collection see [Zbl 1327.68021].Resource analysis of complex programs with cost equations.https://www.zbmath.org/1453.680472021-02-27T13:50:00+00:00"Flores-Montoya, Antonio"https://www.zbmath.org/authors/?q=ai:flores-montoya.antonio"Hähnle, Reiner"https://www.zbmath.org/authors/?q=ai:hahnle.reinerSummary: We present a novel static analysis for inferring precise complexity bounds of imperative and recursive programs. The analysis operates on cost equations. Therefore, it permits uniform treatment of loops and recursive procedures. The analysis is able to provide precise upper bounds for programs with complex execution flow and multi-dimensional ranking functions. In a first phase, a combination of control-flow refinement and invariant generation creates a representation of the possible behaviors of a (possibly inter-procedural) program in the form of a set of execution patterns. In a second phase, a cost upper bound of each pattern is obtained by combining individual costs of code fragments. Our technique is able to detect dependencies between different pieces of code and hence to compute precise upper bounds for a given program. A prototype has been implemented and evaluated to demonstrate the effectiveness of the approach.
For the entire collection see [Zbl 1327.68021].Resource protection using atomics. Patterns and verification.https://www.zbmath.org/1453.680422021-02-27T13:50:00+00:00"Amighi, Afshin"https://www.zbmath.org/authors/?q=ai:amighi.afshin"Blom, Stefan"https://www.zbmath.org/authors/?q=ai:blom.stefan"Huisman, Marieke"https://www.zbmath.org/authors/?q=ai:huisman.mariekeSummary: For the verification of concurrent programs, it is essential to be able to show that synchronisation mechanisms are implemented correctly. A common way to implement such synchronisers is by using atomic operations. This paper identifies which synchronisation patterns can be implemented using atomic read, write and compare-and-set operations. Additionally, this paper proposes a specification of these operations in Java's \texttt{AtomicInteger} class, and shows how different synchronisation mechanisms can be built and verified using an atomic integer as the synchronisation primitive.{
}The specifications for the methods in the \texttt{AtomicInteger} class are derived from the classical concurrent separation logic rule for atomic operations. A main characteristic of our specification is its ease of use. To verify an implementation of a synchronisation mechanism, the user only has to specify (1) what are the different roles of the threads participating in the synchronisation, (2) what are the legal state transitions in the synchroniser, and (3) what share of the resource invariant can be obtained in a certain state, given the role of the current thread. The approach is illustrated on several synchronisation mechanisms. For all implementations, we provide a machine-checked proof that the implementations correctly implement the synchroniser.
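As an illustration of the pattern (a sketch, not the paper's specification): a spinlock is one of the simplest synchronisers expressible with compare-and-set alone. Python has no hardware CAS primitive, so the `AtomicInteger` class below emulates Java's class with an internal guard lock standing in for hardware atomicity:

```python
import threading

class AtomicInteger:
    """Emulation of Java's AtomicInteger; the internal guard lock
    stands in for the hardware compare-and-set instruction."""
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()

    def get(self):
        with self._guard:
            return self._value

    def compare_and_set(self, expect, update):
        with self._guard:
            if self._value == expect:
                self._value = update
                return True
            return False

class SpinLock:
    """A synchroniser built from CAS alone: state 0 = free, 1 = held.
    The legal transitions are 0 -> 1 (acquire) and 1 -> 0 (release)."""
    def __init__(self):
        self._state = AtomicInteger(0)

    def acquire(self):
        while not self._state.compare_and_set(0, 1):
            pass  # spin until the 0 -> 1 transition succeeds

    def release(self):
        self._state.compare_and_set(1, 0)
```

The thread roles here are trivial (any thread may acquire); richer synchronisers such as semaphores or barriers follow the same state-transition pattern described in the abstract.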
For the entire collection see [Zbl 1327.68021].A multistep look-ahead deadlock avoidance policy for automated manufacturing systems.https://www.zbmath.org/1453.900662021-02-27T13:50:00+00:00"Gu, Chao"https://www.zbmath.org/authors/?q=ai:gu.chao"Li, Zhiwu"https://www.zbmath.org/authors/?q=ai:li.zhiwu"Al-Ahmari, Abdulrahman"https://www.zbmath.org/authors/?q=ai:al-ahmari.abdulrahman-mSummary: For an automated manufacturing system (AMS), it is a computationally intractable problem to find a maximally permissive deadlock avoidance policy (DAP) in the general case, since deciding the safety of a reachable state is NP-hard. This paper focuses on the deadlock avoidance problem for systems of simple sequential processes with resources (S\(^3\)PR) using Petri net structural analysis. Inspired by the established one-step look-ahead DAP, which is of polynomial complexity for an S\(^3\)PR without one-unit-capacity resources shared by two or more resource-transition circuits (in the Petri net model) that do not include each other, this research explores a multiple-step look-ahead deadlock avoidance policy for a system modeled with an S\(^3\)PR that contains a shared one-unit-capacity resource in resource-transition circuits. It is shown that the development of an optimal DAP for the considered class of Petri nets is also of polynomial complexity. It is indicated that the steps
needed to look ahead in a DAP depend on the structure of the net model. A number of examples are used to illustrate the proposed method.Functional pearl: nearest shelters in Manhattan.https://www.zbmath.org/1453.680382021-02-27T13:50:00+00:00"Mu, Shin-Cheng"https://www.zbmath.org/authors/?q=ai:mu.shin-cheng"Chen, Ting-Wei"https://www.zbmath.org/authors/?q=ai:chen.tingweiSummary: Godzilla is attacking New York, and your task is to choose, for each shelter in the city, a nearest shelter to evacuate to. Luckily, distance between shelters is measured by Manhattan length, which allows us to complete the task in \(O (n \log n)\) time. We present two algorithms: an algorithmic solution that solves the problem by a list-homomorphism, and a data structure based solution that exploits a ``thinning'' property.
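A key elementary fact behind many \(O(n \log n)\) algorithms for the Manhattan metric (offered here as background, not as the paper's list-homomorphism derivation) is that a 45-degree rotation turns \(L_1\) distance into \(L_\infty\) distance, which then decomposes coordinate by coordinate:

```python
def manhattan(p, q):
    """L1 (Manhattan) distance in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chebyshev(p, q):
    """L-infinity (Chebyshev) distance in the plane."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def rotate45(p):
    """Map (x, y) to (x + y, x - y): the Manhattan distance between
    two points equals the Chebyshev distance between their images,
    since |dx| + |dy| = max(|dx + dy|, |dx - dy|)."""
    return (p[0] + p[1], p[0] - p[1])
```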
For the entire collection see [Zbl 1327.68021].A formalized proof of strong normalization for guarded recursive types.https://www.zbmath.org/1453.680352021-02-27T13:50:00+00:00"Abel, Andreas"https://www.zbmath.org/authors/?q=ai:abel.andreas-m"Vezzosi, Andrea"https://www.zbmath.org/authors/?q=ai:vezzosi.andreaSummary: We consider a simplified version of Nakano's guarded fixed-point types in a representation by infinite type expressions, defined coinductively. Small-step reduction is parametrized by a natural number ``depth'' that expresses under how many guards we may step during evaluation. We prove that reduction is strongly normalizing for any depth. The proof involves a typed inductive notion of strong normalization and a Kripke model of types in two dimensions: depth and typing context. Our results have been formalized in Agda and serve as a case study of reasoning about a language with coinductive type expressions.
For the entire collection see [Zbl 1327.68021].A translation of intersection and union types for the \(\lambda \mu\)-calculus.https://www.zbmath.org/1453.680372021-02-27T13:50:00+00:00"Kikuchi, Kentaro"https://www.zbmath.org/authors/?q=ai:kikuchi.kentaro"Sakurai, Takafumi"https://www.zbmath.org/authors/?q=ai:sakurai.takafumiSummary: We introduce an intersection and union type system for the \(\lambda \mu\)-calculus, which includes a restricted version of the traditional union-elimination rule. We give a translation from intersection and union types into intersection and product types, which is a variant of negative translation from classical logic to intuitionistic logic and naturally reflects the structure of strict intersection and union types. It is shown that a derivation in our type system can be translated into a derivation in the type system of van Bakel, Barbanera and de'Liguoro. As a corollary, the terms typable in our system turn out to be strongly normalising. We also present an intersection and union type system in the style of sequent calculus, and show that the terms typable in the system coincide with the strongly normalising terms of the \(\overline{\lambda} \mu\)-calculus, a call-by-name fragment of Curien and Herbelin's \(\overline{\lambda} \mu \widetilde{\mu}\)-calculus.
For the entire collection see [Zbl 1327.68021].Types for flexible objects.https://www.zbmath.org/1453.680282021-02-27T13:50:00+00:00"Palmer, Zachary"https://www.zbmath.org/authors/?q=ai:palmer.zachary"Menon, Pottayil Harisanker"https://www.zbmath.org/authors/?q=ai:menon.pottayil-harisanker"Rozenshteyn, Alexander"https://www.zbmath.org/authors/?q=ai:rozenshteyn.alexander"Smith, Scott"https://www.zbmath.org/authors/?q=ai:smith.scott-fSummary: Scripting languages are popular in part due to their extremely flexible objects. Features such as dynamic extension, mixins, and first-class messages improve programmability and lead to concise code. But attempts to statically type these features have met with limited success. Here we present TinyBang, a small typed language in which flexible object operations can be encoded. We illustrate this flexibility by solving an open problem in OO literature: we give an encoding where objects can be extended after being messaged without compromising the expressiveness of subtyping. TinyBang's subtype constraint system ensures that all types are completely inferred; there are no data declarations or type annotations. We formalize TinyBang and prove the type system is sound and decidable; all examples in the paper run in our most recent implementation.
For the entire collection see [Zbl 1327.68021].Maximizing the conditional expected reward for reaching the goal.https://www.zbmath.org/1453.901802021-02-27T13:50:00+00:00"Baier, Christel"https://www.zbmath.org/authors/?q=ai:baier.christel"Klein, Joachim"https://www.zbmath.org/authors/?q=ai:klein.joachim"Klüppelholz, Sascha"https://www.zbmath.org/authors/?q=ai:kluppelholz.sascha"Wunderlich, Sascha"https://www.zbmath.org/authors/?q=ai:wunderlich.saschaSummary: The paper addresses the problem of computing maximal conditional expected accumulated rewards until reaching a target state (briefly called maximal conditional expectations) in finite-state Markov decision processes where the condition is given as a reachability constraint. Conditional expectations of this type can, e.g., stand for the maximal expected termination time of probabilistic programs with non-determinism, under the condition that the program eventually terminates, or for the worst-case expected penalty to be paid, assuming that at least three deadlines are missed. The main results of the paper are (i) a polynomial-time algorithm to check the finiteness of maximal conditional expectations, (ii) PSPACE-completeness for the threshold problem in acyclic Markov decision processes where the task is to check whether the maximal conditional expectation exceeds a given threshold, (iii) a pseudo-polynomial-time algorithm for the threshold problem in the general (cyclic) case, and (iv) an exponential-time algorithm for computing the maximal conditional expectation and an optimal scheduler.
For the entire collection see [Zbl 1360.68016].Automatic memory management based on program transformation using ownership.https://www.zbmath.org/1453.680302021-02-27T13:50:00+00:00"Sonobe, Tatsuya"https://www.zbmath.org/authors/?q=ai:sonobe.tatsuya"Suenaga, Kohei"https://www.zbmath.org/authors/?q=ai:suenaga.kohei"Igarashi, Atsushi"https://www.zbmath.org/authors/?q=ai:igarashi.atsushiSummary: We present a type-based program transformation for an imperative programming language with manual memory-management primitives (e.g., \texttt{{malloc}} and \texttt{{free}} in C). Our algorithm, given a program with potential memory leaks, inserts memory-deallocating instructions into the program so that the resulting program does not contain memory leaks. We design the algorithm as type reconstruction for an extension of the ownership-based type system by Suenaga and Kobayashi.
For the entire collection see [Zbl 1327.68021].Logic programming and logarithmic space.https://www.zbmath.org/1453.680332021-02-27T13:50:00+00:00"Aubert, Clément"https://www.zbmath.org/authors/?q=ai:aubert.clement"Bagnol, Marc"https://www.zbmath.org/authors/?q=ai:bagnol.marc"Pistone, Paolo"https://www.zbmath.org/authors/?q=ai:pistone.paolo"Seiller, Thomas"https://www.zbmath.org/authors/?q=ai:seiller.thomasSummary: We present an algebraic view on logic programming, related to proof theory and more specifically linear logic and geometry of interaction. Within this construction, a characterization of logspace (deterministic and non-deterministic) computation is given via a syntactic restriction, using an encoding of words that derives from proof theory.{
}We show that the acceptance of a word by an observation (the counterpart of a program in the encoding) can be decided within logarithmic space, by reducing this problem to the acyclicity of a graph. Moreover, we show that observations are as expressive as two-way multihead finite automata, a kind of pointer machine that is a standard model of logarithmic space computation.
For the entire collection see [Zbl 1327.68021].Robust linear neural network for constrained quadratic optimization.https://www.zbmath.org/1453.920172021-02-27T13:50:00+00:00"Liu, Zixin"https://www.zbmath.org/authors/?q=ai:liu.zixin"Liu, Yuanan"https://www.zbmath.org/authors/?q=ai:liu.yuanan"Xiong, Lianglin"https://www.zbmath.org/authors/?q=ai:xiong.lianglinSummary: Based on the feature of projection operator under box constraint, by using convex analysis method, this paper proposed three robust linear systems to solve a class of quadratic optimization problems. Utilizing linear matrix inequality (LMI) technique, eigenvalue perturbation theory, Lyapunov-Razumikhin method, and LaSalle's invariance principle, some stable criteria for the related models are also established. Compared with previous criteria derived in the literature cited herein, the stable criteria established in this paper are less conservative and more practicable. Finally, a numerical simulation example and an application example in compressed sensing problem are also given to illustrate the validity of the criteria established in this paper.NetKAT -- a formal system for the verification of networks.https://www.zbmath.org/1453.680512021-02-27T13:50:00+00:00"Kozen, Dexter"https://www.zbmath.org/authors/?q=ai:kozen.dexter-cSummary: This paper presents a survey of recent work in the development of NetKAT, a formal system for reasoning about packet switching networks, and its role in the emerging area of software-defined networking.
For the entire collection see [Zbl 1327.68021].Uncertainty measures for interval set information tables based on interval \(\delta \)-similarity relation.https://www.zbmath.org/1453.681882021-02-27T13:50:00+00:00"Zhang, Yimeng"https://www.zbmath.org/authors/?q=ai:zhang.yimeng"Jia, Xiuyi"https://www.zbmath.org/authors/?q=ai:jia.xiuyi"Tang, Zhenmin"https://www.zbmath.org/authors/?q=ai:tang.zhenmin"Long, Xianzhong"https://www.zbmath.org/authors/?q=ai:long.xianzhongSummary: The notion of uncertainty measure is one of the most important topics in rough set theory and has been studied in different kinds of information tables. However, few studies have focused on the interval set information table, which is regarded as one of the generalized models of single-valued information tables. This paper aims at studying the uncertainty measurements for interval set information tables. Firstly, an interval \(\delta \)-similarity relation is induced based on the similarity degree. The similarity relation induces the granules, which form a covering in interval set information tables. Secondly, four types of granularity measures are defined to measure the granularity of a covering. Thirdly, the concepts of accuracy and roughness in rough set theory are respectively extended to \(\delta \)-accuracy and \(\delta \)-roughness for interval set information tables. Furthermore, four new combined uncertainty measures, obtained by combining the proposed granularity measures with \(\delta \)-accuracy and \(\delta \)-roughness, are defined and analyzed. 
Theoretical analyses and experimental results illustrate that the proposed measures are effective and accurate for interval set information tables.Block basis factorization for scalable kernel evaluation.https://www.zbmath.org/1453.650912021-02-27T13:50:00+00:00"Wang, Ruoxi"https://www.zbmath.org/authors/?q=ai:wang.ruoxi"Li, Yingzhou"https://www.zbmath.org/authors/?q=ai:li.yingzhou"Mahoney, Michael W."https://www.zbmath.org/authors/?q=ai:mahoney.michael-w"Darve, Eric"https://www.zbmath.org/authors/?q=ai:darve.ericPassivity-based distributed strategies for stochastic Stackelberg security games.https://www.zbmath.org/1453.910192021-02-27T13:50:00+00:00"Lee, Phillip"https://www.zbmath.org/authors/?q=ai:lee.phillip"Clark, Andrew"https://www.zbmath.org/authors/?q=ai:clark.andrew-g|clark.andrew-k|clark.andrew-j"Alomair, Basel"https://www.zbmath.org/authors/?q=ai:alomair.basel"Bushnell, Linda"https://www.zbmath.org/authors/?q=ai:bushnell.linda-g"Poovendran, Radha"https://www.zbmath.org/authors/?q=ai:poovendran.radhaSummary: Stackelberg security games (SSGs) model scenarios where a defender implements a randomized security policy, while an attacker observes the policy and selects an optimal attack strategy. Applications of SSG include critical infrastructure protection and dynamic defense of computer networks. Current work focuses on centralized algorithms for computing stochastic, mixed-strategy equilibria and translating those equilibria into security policies, which correspond to deciding which subset of targets (e.g., infrastructure components or network nodes) are defended at each time step. In this paper, we develop distributed strategies for multiple, resource-constrained agents to achieve the same equilibrium utility as these centralized policies. Under our approach, each agent moves from defending its current target to defending a new target with a precomputed rate, provided that the current target is not defended by any other agent. 
We analyze this strategy via a passivity-based approach and formulate sufficient conditions for the probability distribution of the set of defended targets to converge to a Stackelberg equilibrium. We then derive bounds on the deviation between the utility of the system prior to convergence and the optimal Stackelberg equilibrium utility, and show that this deviation is determined by the convergence rate of the distributed dynamics. We formulate the problem of selecting a minimum-mobility security policy to achieve a desired convergence rate, as well as the problem of maximizing the convergence rate subject to mobility constraints, and prove that both formulations are convex. Our approach is illustrated and compared to an existing integer programming-based centralized technique through a numerical study.
For the entire collection see [Zbl 1325.68020].An intelligent IoT-based wearable health monitoring system.https://www.zbmath.org/1453.921622021-02-27T13:50:00+00:00"Kassem, Ahmed"https://www.zbmath.org/authors/?q=ai:kassem.ahmed-m"Tamazin, Mohamed"https://www.zbmath.org/authors/?q=ai:tamazin.mohamed"Aly, Moustafa H."https://www.zbmath.org/authors/?q=ai:aly.moustafa-hSummary: Due to the increasing usage of wireless technologies and the miniaturization of electronic sensors, wearable health monitoring technologies have improved drastically, with strong potential to alter the future of healthcare services by using Internet of Things (IoT) active health monitoring sensors for omnipresent monitoring of patients and athletes through their regular daily routines. Medical applications such as remote monitoring, biofeedback and telemedicine create an entirely new base of medical quality and cost management. The objective of this work is to develop a low-cost, high-quality multipurpose wearable smart system for healthcare monitoring of patients with heart diseases and fitness athletes. In this chapter, we discuss the three phases of our proposed system. In the first phase, we use the Raspberry-Pi as an open-source microcontroller with a HealthyPi hat acting as a medium between the Raspberry-Pi and the biomedical sensors connected to it, covering parameters such as temperature, ECG, heartbeat, and oximetry. We began our experiment with 15 test subjects of different genders, ages and fitness levels. We placed the proposed wearable device and collected readings for each test subject while resting, walking and running. The second phase connects our system to an open-source IoT platform to present the data through a graphical IoT dashboard to be viewed by doctors remotely, as well as implementing action rules that send alarms to the patient and doctor in case of problem detection. 
In the third phase, we designed and tested a fuzzy logic system that takes as input the accelerometer, gyroscope, heart rate and blood oxygen level data collected from the experiments and outputs the physical state (resting, walking or running), which helps in determining the health status of the patient/athlete. The results of the proposed method show successful remote health status monitoring of test subjects through the IoT dashboard in real time, detection of abnormalities in their health status, and efficient detection of the physical motion mode using the proposed fuzzy logic system design.
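The fuzzy-classification step can be sketched with triangular membership functions; the single heart-rate input and the numeric ranges below are invented for illustration and are much simpler than the authors' multi-sensor design:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_state(heart_rate):
    """Pick the physical state with the highest membership degree.
    (Illustrative heart-rate ranges only, not the authors' rule base.)"""
    degrees = {
        'resting': tri(heart_rate, 40, 60, 90),
        'walking': tri(heart_rate, 80, 105, 130),
        'running': tri(heart_rate, 120, 160, 220),
    }
    return max(degrees, key=degrees.get)
```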
For the entire collection see [Zbl 1451.74012].Algebraic dependence in generating functions and expansion complexity.https://www.zbmath.org/1453.940522021-02-27T13:50:00+00:00"Gómez-Pérez, Domingo"https://www.zbmath.org/authors/?q=ai:gomez-perez.domingo"Mérai, László"https://www.zbmath.org/authors/?q=ai:merai.laszlo\textit{C. Diem} [Algebra Number Theory 7, No. 6, 1281--1323 (2013; Zbl 1300.11132)] introduced a measure of pseudorandomness for cryptographic sequences called expansion complexity which was further analyzed by the second author et al. [Cryptogr. Commun. 9, No. 4, 501--509 (2017; Zbl 1409.94855)] and the authors et al. [IEEE Trans. Inf. Theory 64, No. 6, 4228--4232 (2018; Zbl 1395.94262)].
First the authors express the expansion complexity in terms of the Gröbner basis of its defining ideal. Then they present a corrected form of [the second author et al., loc. cit., Theorem 4] on the typical value of the expansion complexity. Finally, they study the expansion complexity of sequences defined by differential equations including the explicit inversive generator.
Reviewer: Arne Winterhof (Linz)Modelling of output flows in queuing systems and networks.https://www.zbmath.org/1453.601522021-02-27T13:50:00+00:00"Tsitsiashvili, Gurami"https://www.zbmath.org/authors/?q=ai:tsitsiashvili.gurami-sh|tsitsiashvili.gurami-g"Osipova, Marina"https://www.zbmath.org/authors/?q=ai:osipova.marina-aSummary: A simplification of the proof of Burke's theorem [\textit{P. J. Burke}, Oper. Res. 4, No. 6, 699--704 (1956; Zbl 1414.90097)] and its generalizations to queuing systems and networks are considered. The simplification is based on the fact that points of the output flow occur at the moments when the Markov process describing the number of customers in the queuing system jumps down. First steps in this direction were made in [the authors, ``Generalization and extension of Burke theorem'', Reliab. Theor. Appl. 13, No. 1, 59--62 (2018)]. Here we improve the proofs of the main results and consider queuing systems in a random environment. In this way it is possible to obtain the mutual independence of the output flow over disjoint time periods and to calculate its intensity. In this case a Poisson input flow with randomly varying intensity may also be represented as a Poisson flow with the average intensity. If this flow is independent of the service process, then the treatment of queuing systems in a random environment simplifies significantly. These assumptions may be applied to multiphase type networks [the first author et al., ``Jackson network as network of multiphase type'', Int. Math. Forum 12, 303--310 (2017; \url{doi:10.12988/imf.2017.716})], which are convenient in the analysis of queuing models with retrial queues [\textit{S. Moiseeva} and \textit{L. Zadiranova}, Commun. Comput. Inf. Sci. 601, 370--377 (2016; Zbl 1390.90246); \textit{S. P. Moiseeva} and \textit{I. A. Zakhorol'naya}, ``Mathematical model of parallel retrial queueing of multiple requests'', Optoelec. Instrum. Data Process. 47, No. 
6, 567--572 (2011; \url{doi:10.3103/s8756699011060276}); \textit{A. Moiseev} and \textit{A. Nazarov}, Eur. J. Oper. Res. 254, No. 1, 161--168 (2016; Zbl 1346.90259); \textit{R. D. Nobel} and \textit{H. C. Tijms}, ibid. 113, No. 3, 610--619 (1999; Zbl 0947.90028); ``Waiting-time probabilities in the \(M/G/1\) retrial queue'', Stat. Neerlandica 60, No. 1, 73--78 (2006; \url{doi:10.1111/j.1467-9574.2006.00312.x})].
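The content of Burke's theorem, that in steady state the departure process of a stable M/M/1 queue is Poisson with the arrival rate \(\lambda\), is easy to probe numerically. The simulation below is an illustrative sketch (not from the paper) and checks only the mean departure rate:

```python
import random

def mm1_departures(lam, mu, n, seed=0):
    """Simulate n customers through an M/M/1 queue (arrival rate
    lam, service rate mu) and return the sorted departure times."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n):
        t += rng.expovariate(lam)  # Poisson arrival process
        arrivals.append(t)
    departures = []
    server_free_at = 0.0
    for a in arrivals:
        start = max(a, server_free_at)        # wait if the server is busy
        server_free_at = start + rng.expovariate(mu)
        departures.append(server_free_at)
    return departures
```

By Burke's theorem the mean inter-departure time should approach \(1/\lambda\) regardless of \(\mu\), as long as \(\lambda < \mu\) so that the queue is stable.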
For the entire collection see [Zbl 1403.60002].An institution for Object-Z with inheritance and polymorphism.https://www.zbmath.org/1453.680432021-02-27T13:50:00+00:00"Baumeister, Hubert"https://www.zbmath.org/authors/?q=ai:baumeister.hubert"Bettaz, Mohamed"https://www.zbmath.org/authors/?q=ai:bettaz.mohamed"Maouche, Mourad"https://www.zbmath.org/authors/?q=ai:maouche.mourad"Mosteghanemi, M'hamed"https://www.zbmath.org/authors/?q=ai:mosteghanemi.mhamedSummary: Large software systems are best specified using a multi-paradigm approach. Depending on which aspects of a system one wants to model, some logic formalisms are better suited than others. The theory of institutions and (co)morphisms between institutions provides a general framework for describing logical systems and their connections. This is the foundation of multi-modelling languages allowing one to deal with heterogeneous specifications in a consistent way. To make Object-Z accessible as part of such a multi-modelling language, we define the institution OZS for Object-Z. We have chosen Object-Z in part because it is a prominent software modelling language and in part because it allows us to study the formalisation of object-oriented concepts, like object identity, object state, dynamic behaviour, polymorphic sorts and inheritance.
For the entire collection see [Zbl 1312.68005].Dynamic conflict-free colorings in the plane.https://www.zbmath.org/1453.682062021-02-27T13:50:00+00:00"de Berg, Mark"https://www.zbmath.org/authors/?q=ai:de-berg.mark-t"Markovic, Aleksandar"https://www.zbmath.org/authors/?q=ai:markovic.aleksandarSummary: We study dynamic conflict-free colorings in the plane, where the goal is to maintain a conflict-free coloring (CF-coloring for short) under insertions and deletions.{\parindent=10mm \begin{itemize}\item[{\(\bullet\)}] First we consider CF-colorings of a set \(\mathcal{S}\) of unit squares with respect to points. Our method maintains a CF-coloring that uses \(O(\log n)\) colors at any time, where \(n\) is the current number of squares in \(\mathcal{S}\), at the cost of only \(O(\log n)\) recolorings per insertion or deletion of a square. We generalize the method to rectangles whose sides have lengths in the range \([1, c]\), where \(c\) is a fixed constant. Here the number of colors used becomes \(O(\log^2 n)\). The method also extends to arbitrary rectangles whose coordinates come from a fixed universe of size \(N\), yielding \(O(\log^2 N \log^2 n)\) colors. The number of recolorings for both methods stays in \(O(\log n)\).\item [{\(\bullet\)}] We then present a general framework to maintain a CF-coloring under insertions for sets of objects that admit a unimax coloring with a small number of colors in the static case. As an application we show how to maintain a CF-coloring with \(O(\log^3 n)\) colors for disks (or other objects with linear union complexity) with respect to points at the cost of \(O(\log n)\) recolorings per insertion. We extend the framework to the fully-dynamic case when the static unimax coloring admits weak deletions. As an application we show how to maintain a CF-coloring with \(O(\sqrt{n} \log^2 n)\) colors for points with respect to rectangles, at the cost of \(O(\log n)\) recolorings per insertion and \(O(1)\) recolorings per deletion.
\end{itemize}} These are the first results on fully-dynamic CF-colorings in the plane, and the first results for semi-dynamic CF-colorings for non-congruent objects.Geometric clustering in normed planes.https://www.zbmath.org/1453.682082021-02-27T13:50:00+00:00"Martín, Pedro"https://www.zbmath.org/authors/?q=ai:martin.pedro"Yáñez, Diego"https://www.zbmath.org/authors/?q=ai:yanez.diegoSummary: Given two sets of points \(A\) and \(B\) in a normed plane, we prove that there are two linearly separable sets \(A^\prime\) and \(B^\prime\) such that \(\operatorname{diam}(A^\prime) \leq \operatorname{diam}(A)\), \(\operatorname{diam}(B^\prime) \leq \operatorname{diam}(B)\), and \(A^\prime \cup B^\prime = A \cup B\). As a result, for a given \(k\), some Euclidean \(k\)-clustering algorithms are adapted to normed planes, for instance, those that minimize the maximum, the sum, or the sum of squares of the \(k\) cluster diameters. The 2-clustering problem is studied when two different bounds are imposed on the diameters. The Hershberger-Suri data structure for managing ball hulls can be useful in this context.New pruning rules for the Steiner tree problem and 2-connected Steiner network problem.https://www.zbmath.org/1453.682012021-02-27T13:50:00+00:00"Brazil, Marcus"https://www.zbmath.org/authors/?q=ai:brazil.marcus-n"Volz, Marcus"https://www.zbmath.org/authors/?q=ai:volz.marcus"Zachariasen, Martin"https://www.zbmath.org/authors/?q=ai:zachariasen.martin"Ras, Charl"https://www.zbmath.org/authors/?q=ai:ras.charl-j"Thomas, Doreen"https://www.zbmath.org/authors/?q=ai:thomas.doreen-anneSummary: We introduce the concepts of \(k\)-lunes and \(k\)-lune inequalities, which form the basis for new geometric pruning rules for limiting the number of candidate full components that need to be considered when solving the Euclidean Steiner tree problem or the Euclidean 2-connected Steiner network problem. 
For the latter problem, these new pruning rules constitute the first empty region properties to have been developed for the problem. We show how to implement these rules efficiently and run computational experiments, indicating the extent to which they can improve the performance of state-of-the-art algorithms for these problems.FO model checking on geometric graphs.https://www.zbmath.org/1453.681022021-02-27T13:50:00+00:00"Hliněný, Petr"https://www.zbmath.org/authors/?q=ai:hlineny.petr"Pokrývka, Filip"https://www.zbmath.org/authors/?q=ai:pokryvka.filip"Roy, Bodhayan"https://www.zbmath.org/authors/?q=ai:roy.bodhayanSummary: Over the past two decades the main focus of research into first-order (FO) model checking algorithms has been on sparse relational structures -- culminating in the FPT algorithm by Grohe, Kreutzer and Siebertz for FO model checking on nowhere dense classes of graphs. On contrary to that, except the case of locally bounded clique-width only little is currently known about FO model checking on dense classes of graphs or other structures. We study the FO model checking problem on dense graph classes definable by geometric means (intersection and visibility graphs). We obtain new nontrivial FPT results, e.g., for restricted subclasses of circular-arc, circle, box, disk, and polygon-visibility graphs. These results use the FPT algorithm by Gajarský et al. for FO model checking on posets of bounded width. We also complement the tractability results by related hardness reductions.Finding a Hamilton cycle fast on average using rotations and extensions.https://www.zbmath.org/1453.681182021-02-27T13:50:00+00:00"Alon, Yahav"https://www.zbmath.org/authors/?q=ai:alon.yahav"Krivelevich, Michael"https://www.zbmath.org/authors/?q=ai:krivelevich.michaelSummary: We present an algorithm \textit{CRE}, which either finds a Hamilton cycle in a graph \(G\) or determines that there is no such cycle in the graph. 
The algorithm's expected running time over input distribution \(G \sim G(n,p)\) is \((1+o(1))n/p\), the best possible expected time, for \(p=p(n) \ge 70n^{-\frac{1}{2}}\). This improves upon previous results on this problem due to
\textit{Y. Gurevich} and \textit{S. Shelah} [SIAM J. Comput. 16, 486--502 (1987; Zbl 0654.68083)],
and to \textit{A. Thomason} [Discrete Math. 75, No. 1--3, 373--379 (1989; Zbl 0681.05051)].Analysis of Legendre polynomial kernel in support vector machines.https://www.zbmath.org/1453.681492021-02-27T13:50:00+00:00"Djelloul, Naima"https://www.zbmath.org/authors/?q=ai:djelloul.naima"Amir, Abdessamad"https://www.zbmath.org/authors/?q=ai:amir.abdessamadSummary: For several types of machine learning problems, the support vector machine is a method of choice. Kernel functions are a basic ingredient in support vector machine theory. Kernels based on orthogonal polynomials have given great satisfaction in practice. In this paper we identify the reproducing kernel Hilbert space of the Legendre polynomial kernel, which allows us to understand its ability to extract more discriminative features. We also show that, without being a universal kernel, the Legendre kernel possesses the same separation properties. The performance of the Legendre, Gaussian, and polynomial kernels was first evaluated on two-dimensional illustrative examples to give a graphical comparison, and then on real-world data sets from the UCI repository. For nonlinearly separable data, the Legendre kernel consistently gives satisfactory classification accuracy and a reduction in the number of support vectors.An efficient sum query algorithm for distance-based locally dominating functions.https://www.zbmath.org/1453.680642021-02-27T13:50:00+00:00"Huang, Ziyun"https://www.zbmath.org/authors/?q=ai:huang.ziyun"Xu, Jinhui"https://www.zbmath.org/authors/?q=ai:xu.jinhuiSummary: In this paper, we consider the following sum query problem: Given a point set \(P\) in \(\mathbb{R}^d\), and a distance-based function \(f(p,q)\) (i.e.
a function of the distance between \(p\) and \(q)\) satisfying some general properties, the goal is to develop a data structure and a query algorithm for efficiently computing a \((1+\varepsilon)\)-approximate solution to the sum \(\sum_{p\in P}f(p,q)\) for any query point \(q\in\mathbb{R}^d\) and any small constant \(\varepsilon>0\). Existing techniques for this problem are mainly based on core-set techniques, which often have difficulty dealing with functions with the local domination property. Based on several new insights into this problem, we develop in this paper a novel technique to overcome these difficulties. Our algorithm is capable of answering queries with high success probability in time no more than \(\widetilde O_{\varepsilon,d}(n^{0.5+c})\), and the underlying data structure can be constructed in \(\widetilde O_{\varepsilon,d}(n^{1+c})\) time for any \(c>0\), where the hidden constant has only polynomial dependence on \(1/\varepsilon\) and \(d\). Our technique is simple and can be easily implemented for practical purposes.
For the entire collection see [Zbl 1376.68013].A unified approach for optimal release, patching and testing time of a software.https://www.zbmath.org/1453.680582021-02-27T13:50:00+00:00"Kapur, P. K."https://www.zbmath.org/authors/?q=ai:kapur.p-k"Singh, Ompal"https://www.zbmath.org/authors/?q=ai:singh.ompal"Shrivastava, A. K."https://www.zbmath.org/authors/?q=ai:shrivastava.anil-kumar|shrivastava.abhishek-kSummary: Ever-growing consumer expectations for innovative and reliable software products delivered in ever-shorter time frames have driven developers to reframe the whole development process accordingly. To survive the stiff market competition, software firms nowadays release their products early and continue testing to fix the remaining bugs in the operational phase by providing patches. A patch is a small program that fixes bugs. But continuing to test in order to provide patches for a better user experience is not economical from the firm's point of view. In the existing software reliability literature, a few researchers have worked on determining the optimal release and testing-stop times of a software product, but no work has been done to jointly determine the optimal release, patching, and testing-stop times. In this paper, we present a unified framework to determine the optimal software release time, patching time, and testing-stop time so as to minimise the overall testing cost. The proposed cost model is validated on a real-life software failure dataset of Tandem computers.Permutations of \(\mathbb{N}\) generated by left-right filling algorithms.https://www.zbmath.org/1453.110392021-02-27T13:50:00+00:00"Dekking, F.
M."https://www.zbmath.org/authors/?q=ai:dekking.michelKimberling introduced a permutation of \(\mathbb{N}\) obtained by a left-right filling procedure defined by \(\Pi(1)=1\) and, for \(n\ge 2\), \(\Pi(n-L(n)):=n\) if \(\Pi(n-L(n))\) is not yet defined, and otherwise \(\Pi(n+R(n)):=n\), where \(L,R: \mathbb{N}\to\mathbb{N}\) are two functions. For \(L(n)=R(n)=\lfloor n/2\rfloor\), the author provides a one-to-one connection between the corresponding permutation and a 3-automatic sequence, which makes it possible to establish many properties of this permutation. He also gives a framework to analyze other left-right filling algorithms, establishing many relations between the corresponding permutations.
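The left-right filling procedure described above is easy to simulate; a minimal Python sketch (the function name and the dictionary representation are ours, not the paper's):

```python
def left_right_filling(N, L=lambda n: n // 2, R=lambda n: n // 2):
    """Kimberling-style left-right filling: Pi(1) = 1 and, for n >= 2,
    place n at position n - L(n) if that slot is still empty,
    otherwise at position n + R(n).  Returns {position: value}."""
    pi = {1: 1}
    for n in range(2, N + 1):
        pos = n - L(n)
        if pos in pi:          # left slot already filled: go right
            pos = n + R(n)
        pi[pos] = n
    return pi

# With L(n) = R(n) = floor(n/2), the permutation begins
# Pi(1), ..., Pi(5) = 1, 3, 2, 7, 9.
```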
Reviewer: Michel Rigo (Liège)Non-depth-first search against independent distributions on an AND-OR tree.https://www.zbmath.org/1453.681742021-02-27T13:50:00+00:00"Suzuki, Toshio"https://www.zbmath.org/authors/?q=ai:suzuki.toshioSummary: The first author and \textit{Y. Niida} [Ann. Pure Appl. Logic 166, No. 11, 1150--1164 (2015; Zbl 1335.68244)]
showed the following results on independent distributions (IDs) on an AND-OR tree, where they took only depth-first algorithms into consideration. (1) Suppose that a positive real number \(r<1\) is given, and let \(I(r)\) denote the class of all IDs such that the probability of the root having value 0 is \(r\); if, among members of \(I(r)\), \(d\) is a maximizer of the cost of the best algorithm, then \(d\) is an independent and identical distribution (IID). (2) The same as above holds for the set of all IDs in place of \(I(r)\). In the case where non-depth-first algorithms are taken into consideration, the counterparts of (1) and (2) are left open in the above work.
\textit{W. Peng} et al. [Inf. Process. Lett. 125, 41--45 (2017; Zbl 1409.68272)]
extended (1) and (2) to multi-branching trees, where in (2) they put an additional hypothesis on ID \(d\) that the probability of the root having value 0 is neither 0 nor 1. We give positive answers to the two questions of Suzuki-Niida. A key to the proof is that if ID \(d\) achieves the equilibrium among IDs then \(d\) has an optimal algorithm that is depth-first. In addition, we extend Theorem 3 of Peng et al. to the case where non-depth-first algorithms are taken into consideration.Fast secure comparison for medium-sized integers and its application in binarized neural networks.https://www.zbmath.org/1453.941282021-02-27T13:50:00+00:00"Abspoel, Mark"https://www.zbmath.org/authors/?q=ai:abspoel.mark"Bouman, Niek J."https://www.zbmath.org/authors/?q=ai:bouman.niek-j"Schoenmakers, Berry"https://www.zbmath.org/authors/?q=ai:schoenmakers.berry"de Vreede, Niels"https://www.zbmath.org/authors/?q=ai:de-vreede.nielsSummary: In 1994, Feige, Kilian, and Naor [\textit{U. Feige} et al., STOC 1994, New York, NY: ACM, 554--563 (1994; Zbl 1344.68030)] proposed a simple protocol for secure 3-way comparison of integers \(a\) and \(b\) from the range [0, 2]. Their observation is that for \(p=7\), the Legendre symbol \(\left(x\mid p\right)\) coincides with the sign of \(x\) for \(x=a-b\in[-2,2]\), thus reducing secure comparison to secure evaluation of the Legendre symbol. More recently, in 2011, \textit{C. H. Yu} [Sign modules in secure arithmetic circuits. Cryptology ePrint Archive, Report 2011/539 (2011), \url{http://eprint.iacr.org/2011/539}] generalized this idea to handle secure comparisons for integers from substantially larger ranges \([0, d]\), essentially by searching for primes for which the Legendre symbol coincides with the sign function on \([-d,d]\). In this paper, we present new comparison protocols based on the Legendre symbol that additionally employ some form of error correction.
We relax the prime search by requiring that the Legendre symbol encodes the sign function in a noisy fashion only. Practically, we use the majority vote over a window of \(2k+1\) adjacent Legendre symbols, for small positive integers \(k\). Our technique significantly increases the comparison range: e.g., for a modulus of 60 bits, \(d\) increases by a factor of 2.8 (for \(k=1)\) and 3.8 (for \(k=2)\) respectively. We give a practical method to find primes with suitable noisy encodings.
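The majority-vote decoding described above can be prototyped in the clear (outside any multiparty protocol); a small Python sketch, with function names of our own choosing and the toy prime \(p=7\) from the Feige-Kilian-Naor observation rather than the 60-bit moduli of the paper:

```python
def legendre(x, p):
    """Legendre symbol (x | p) for an odd prime p, via Euler's criterion:
    x^((p-1)/2) mod p is 1 for quadratic residues, p-1 for non-residues."""
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def noisy_sign(x, p, k):
    """Majority vote over the window of 2k+1 adjacent Legendre symbols
    (x-k | p), ..., (x+k | p), as in the error-corrected comparison."""
    votes = sum(legendre(x + j, p) for j in range(-k, k + 1))
    return 1 if votes > 0 else -1
```

For \(p=7\) the plain Legendre symbol already equals the sign on \([-2,2]\): \((1\mid 7)=(2\mid 7)=1\) and \((-1\mid 7)=(-2\mid 7)=-1\); the windowed vote extends this idea to primes where a few symbols in the range are wrong.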
We demonstrate the practical relevance of our comparison protocol by applying it in a secure neural network classifier for the MNIST dataset. Concretely, we discuss a secure multiparty computation based on the binarized multi-layer perceptron of \textit{I. Hubara} et al. [J. Mach. Learn. Res. 18 (2017-2018), Paper No. 187, 30 p. (2018; Zbl 06982943)], using our comparison for the second and third layers.
For the entire collection see [Zbl 1409.94003].On the complexity of an inexact restoration method for constrained optimization.https://www.zbmath.org/1453.901622021-02-27T13:50:00+00:00"Bueno, Luís Felipe"https://www.zbmath.org/authors/?q=ai:bueno.luis-felipe"Martínez, José Mario"https://www.zbmath.org/authors/?q=ai:martinez.jose-marioProbability functions in the context of signed involutive meadows (extended abstract).https://www.zbmath.org/1453.681052021-02-27T13:50:00+00:00"Bergstra, Jan A."https://www.zbmath.org/authors/?q=ai:bergstra.jan-a"Ponse, Alban"https://www.zbmath.org/authors/?q=ai:ponse.albanSummary: The Kolmogorov axioms for probability functions are placed in the context of signed meadows. A completeness theorem is stated and proven for the resulting equational theory of probability calculus. Elementary definitions of probability theory are restated in this framework.
For the entire collection see [Zbl 1428.68025].Broad echo state network for multivariate time series prediction.https://www.zbmath.org/1453.626522021-02-27T13:50:00+00:00"Yao, Xianshuang"https://www.zbmath.org/authors/?q=ai:yao.xianshuang"Wang, Zhanshan"https://www.zbmath.org/authors/?q=ai:wang.zhanshanSummary: In this paper, a broad echo state network with multiple reservoirs in parallel configuration (broad-ESN) is proposed for a class of multivariate time series prediction. Firstly, through the unsupervised learning algorithm of restricted Boltzmann machine (RBM), the number of reservoirs of broad-ESN can be determined, such that the dynamic characteristics of a class of multivariate time series can be fully reflected. Secondly, a parameter optimization method based on Davidon-Fletcher-Powell (DFP) quasi-Newton algorithm is proposed to optimize the reservoir parameters of broad-ESN. Meanwhile, an output weights learning method based on output error is given to train the output weights of broad-ESN. Thirdly, a sufficient condition for the echo state property of broad-ESN is given. Finally, four examples are given to verify the effectiveness of broad-ESN.Decision making with spherical fuzzy sets. Theory and applications.https://www.zbmath.org/1453.680152021-02-27T13:50:00+00:00"Kahraman, Cengiz (ed.)"https://www.zbmath.org/authors/?q=ai:kahraman.cengiz"Gündoğdu, Fatma Kutlu (ed.)"https://www.zbmath.org/authors/?q=ai:gundogdu.fatma-kutluPublisher's description: This book introduces readers to the novel concept of spherical fuzzy sets, showing how these sets can be applied in practice to solve various decision-making problems. It also demonstrates that these sets provide a larger preference volume in 3D space for decision-makers. 
Written by authoritative researchers, the various chapters cover a large amount of theoretical and practical information, allowing readers to gain an extensive understanding of both the fundamentals and applications of spherical fuzzy sets in intelligent decision-making and mathematical programming.
The articles of mathematical interest will be reviewed individually.24th international conference on principles of distributed systems, OPODIS 2020, Strasbourg, France, virtual conference, December 14--16, 2020.https://www.zbmath.org/1453.680112021-02-27T13:50:00+00:00"Bramas, Quentin (ed.)"https://www.zbmath.org/authors/?q=ai:bramas.quentin"Oshman, Rotem (ed.)"https://www.zbmath.org/authors/?q=ai:oshman.rotem"Romano, Paolo (ed.)"https://www.zbmath.org/authors/?q=ai:romano.paoloThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1434.68024].A boundary class for the \(k\)-path partition problem.https://www.zbmath.org/1453.681382021-02-27T13:50:00+00:00"Korpelainen, Nicholas"https://www.zbmath.org/authors/?q=ai:korpelainen.nicholasSummary: We establish the first known \textit{boundary class} for the \(k\)-path partition problem and deduce that for a graph class defined by finitely many minimal forbidden induced subgraphs, the \(k\)-path partition problem remains NP-hard unless one of the forbidden induced subgraphs is a subcubic tree (a tree of maximum degree at most 3) with at most one vertex of degree 3.
For the entire collection see [Zbl 1392.00002].29th EACSL annual conference on computer science logic, CSL 2021, Ljubljana, Slovenia, virtual conference, January 25--28, 2021.https://www.zbmath.org/1453.680092021-02-27T13:50:00+00:00"Baier, Christel (ed.)"https://www.zbmath.org/authors/?q=ai:baier.christel"Goubault-Larrecq, Jean (ed.)"https://www.zbmath.org/authors/?q=ai:goubault-larrecq.jeanThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1434.68025].Tighter bounds for online bipartite matching.https://www.zbmath.org/1453.682182021-02-27T13:50:00+00:00"Feige, Uriel"https://www.zbmath.org/authors/?q=ai:feige.urielSummary: We study the online bipartite matching problem, introduced by
\textit{R. M. Karp} et al. [``An optimal algorithm for on-line bipartite matching'', in: Proceedings of the 22nd annual ACM symposium on theory of computing, STOC'90. New York, NY: Association for Computing Machinery (ACM). 352--358 (1990; \url{doi:10.1145/100216.100262})].
For bipartite graphs with matchings of size \(n\), it is known that the \textit{Ranking} randomized algorithm matches at least \((1-\frac{1}{e})n\) edges in expectation. It is also known that no online algorithm matches more than \((1-\frac{1}{e})n+O(1)\) edges in expectation, when the input is chosen from a certain distribution that we refer to as \(D_n\). This upper bound also applies to \textit{fractional} matchings. We review the known proofs for this last statement. In passing we observe that the \(O(1)\) additive term (in the upper bound for fractional matching) is \(\frac{1}{2}-\frac{1}{2e}+O(\frac{1}{n})\), and that this term is tight: the online algorithm known as \textit{Balance} indeed produces a fractional matching of this size. We provide a new proof that exactly characterizes the expected cardinality of the (integral) matching produced by \textit{Ranking} when the input graph comes from the support of \(D_n\). This expectation turns out to be \((1-\frac{1}{e})n+1-\frac{2}{e}+O(\frac{1}{n!})\), and serves as an upper bound on the performance ratio of any online (integral) matching algorithm.
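For readers unfamiliar with it, the Ranking algorithm admits a very short sketch; the input representation (offline vertex list plus online arrival sequence with neighbor lists) is our own choice:

```python
import random

def ranking(offline, arrivals, rng=random):
    """Ranking (Karp-Vazirani-Vazirani): draw a uniformly random
    permutation of the offline vertices once; match each arriving
    online vertex to its unmatched neighbor of highest rank
    (smallest rank number)."""
    rank = {v: r for r, v in enumerate(rng.sample(list(offline), len(offline)))}
    match = {}  # offline vertex -> online vertex it is matched to
    for u, neighbors in arrivals:
        free = [v for v in neighbors if v not in match]
        if free:
            match[min(free, key=rank.__getitem__)] = u
    return match

# On a graph whose perfect matching is found under every ranking,
# the algorithm matches all online vertices:
matching = ranking(["x", "y"], [("a", ["x"]), ("b", ["y"])])
```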
For the entire collection see [Zbl 1443.05002].A topologically complete theory of weaving.https://www.zbmath.org/1453.050692021-02-27T13:50:00+00:00"Akleman, Ergun"https://www.zbmath.org/authors/?q=ai:akleman.ergun"Chen, Jianer"https://www.zbmath.org/authors/?q=ai:chen.jian-er"Gross, Jonathan L."https://www.zbmath.org/authors/?q=ai:gross.jonathan-l"Hu, Shiyu"https://www.zbmath.org/authors/?q=ai:hu.shiyuTwo applications of monoid actions to cross-sections.https://www.zbmath.org/1453.200842021-02-27T13:50:00+00:00"Brough, Tara"https://www.zbmath.org/authors/?q=ai:brough.tara"Cain, Alan J."https://www.zbmath.org/authors/?q=ai:cain.alan-j"Maltcev, Victor"https://www.zbmath.org/authors/?q=ai:maltcev.victorLet \(M\) be a monoid generated by some (usually finite) set \(A\). Then there is, by the universal mapping property of free monoids, a unique homomorphism \(\varphi\colon A^* \to M\) such that \(\varphi(a) = a\) for all \(a \in A\). A language \(L \subseteq A^*\) is said to be a cross-section of \(M\) if precisely one word \(w \in L\) with \(\varphi(w) = m\) exists for each \(m \in M\). If a monoid \(M\) admits a prefix-closed regular cross-section, then it is said to be a Markov monoid [\textit{A. J. Cain} and \textit{V. Maltcev}, Int. J. Algebra Comput. 24, No. 5, 609--653 (2014; Zbl 1325.20055)].
The authors solve two problems involving cross-sections of monoids, the unifying theme being that both results utilise a simple construction of a monoid \(M[T]\) from an action of some other monoid \(M\) on a set \(T\). In their first main result, they describe a monoid, whose direct product with \(\mathbb{Z}\) is Markov, but which itself does not admit a regular cross-section. This implies that neither the class of all Markov monoids, nor the class of all monoids with regular cross-sections is closed under taking direct factors. The second main result of the article is concerned with full trios of languages [\textit{S. Ginsburg}, Algebraic and automata-theoretic properties of formal languages. Amsterdam, Oxford: North-Holland Publishing Company (1975; Zbl 0325.68002)], also known as rational cones [\textit{J. Berstel}, Transductions and context-free languages. Stuttgart: B. G. Teubner (1979; Zbl 0424.68040)]. The authors prove that if \(\mathscr{C},\mathscr{D}\) are full trios such that \(\mathscr{C}\) is not a subclass of \(\mathscr{D}\), then there exists a monoid, which admits a cross-section in \(\mathscr{C}\), but not a cross-section in \(\mathscr{D}\). This is applied to several classes of languages, which are known to form full trios.
Reviewer: Peter Kostolányi (Bratislava)What do Asian and non-Asian scriptures have in common? An applied statistical machine learning inquiry.https://www.zbmath.org/1453.628002021-02-27T13:50:00+00:00"Sah, Preeti"https://www.zbmath.org/authors/?q=ai:sah.preeti"Fokoué, Ernest"https://www.zbmath.org/authors/?q=ai:fokoue.ernest-parfaitSummary: This paper presents a substantially detailed statistical machine learning approach to the analysis of several aspects of sacred texts from both the Asian and Biblical scriptural canons. The corpus herein considered consists of 4 Asian sacred scriptures, namely the Tao Te Ching, the teachings of the Buddha, the Yogasutras of Patanjali, and the Upanishads, and 4 non-Asian sacred texts, essentially four books from the Bible, namely the Book of Proverbs, the Book of Wisdom, the Book of Ecclesiastes and the Book of Ecclesiasticus. Standard text mining tools are used, like the creation of Document Term Matrices (DTM) to pre-process raw English translations into word frequencies, and both unsupervised and supervised learning methods are used to answer some foundational questions featuring similarities and dissimilarities within each canon and interesting differences between all the canons considered. Despite the vast disparities between the translators of the original texts, our findings reveal sharp differences between Asian and non-Asian scriptures regardless of whether clustering techniques or pattern recognition methods are used.
We provide several compelling visualizations to help highlight our striking findings, chief of which are the persistent groupings of the scriptures based on geography.Optimal thresholds for anomaly-based intrusion detection in dynamical environments.https://www.zbmath.org/1453.910322021-02-27T13:50:00+00:00"Ghafouri, Amin"https://www.zbmath.org/authors/?q=ai:ghafouri.amin"Abbas, Waseem"https://www.zbmath.org/authors/?q=ai:abbas.waseem"Laszka, Aron"https://www.zbmath.org/authors/?q=ai:laszka.aron"Vorobeychik, Yevgeniy"https://www.zbmath.org/authors/?q=ai:vorobeychik.yevgeniy"Koutsoukos, Xenofon"https://www.zbmath.org/authors/?q=ai:koutsoukos.xenofon-dSummary: In cyber-physical systems, malicious and resourceful attackers could penetrate a system through cyber means and cause significant physical damage. Consequently, early detection of such attacks becomes integral towards making these systems resilient to attacks. To achieve this objective, intrusion detection systems (IDS) that are able to detect malicious behavior early enough can be deployed. However, practical IDS are imperfect and sometimes they may produce false alarms even for normal system behavior. Since alarms need to be investigated for any potential damage, a large number of false alarms may increase the operational costs significantly. Thus, IDS need to be configured properly, as oversensitive IDS could detect attacks very early but at the cost of a higher number of false alarms. Similarly, IDS with very low sensitivity could reduce the false alarms while increasing the time to detect the attacks. The configuration of IDS to strike the right balance between time to detecting attacks and the rate of false positives is a challenging task, especially in dynamic environments, in which the damage caused by a successful attack is time-varying.{
}In this paper, using a game-theoretic setup, we study the problem of finding optimal detection thresholds for anomaly-based detectors implemented in dynamical systems in the face of strategic attacks. We formulate the problem as an attacker-defender security game, and determine thresholds for the detector to achieve an optimal trade-off between the detection delay and the false positive rates. In this direction, we first provide an algorithm that computes an optimal fixed threshold that remains fixed throughout. Second, we allow the detector's threshold to change with time to further minimize the defender's loss, and we provide a polynomial-time algorithm to compute time-varying thresholds, which we call adaptive thresholds. Finally, we numerically evaluate our results using a water-distribution network as a case study.
For the entire collection see [Zbl 1347.68013].Generating specialized interpreters for modular structural operational semantics.https://www.zbmath.org/1453.680402021-02-27T13:50:00+00:00"Bach Poulsen, Casper"https://www.zbmath.org/authors/?q=ai:bach-poulsen.casper"Mosses, Peter D."https://www.zbmath.org/authors/?q=ai:mosses.peter-dSummary: Modular Structural Operational Semantics (MSOS) is a variant of Structural Operational Semantics (SOS). It allows language constructs to be specified independently, such that no reformulation of existing rules in an MSOS specification is required when a language is extended with new constructs and features.{
}Introducing the Prolog MSOS Tool, we recall how to synthesize executable interpreters from small-step MSOS specifications by compiling MSOS rules into Prolog clauses. Implementing the transitive closure of compiled small-step rules gives an executable interpreter in Prolog. In the worst case, such interpreters traverse each intermediate program term in its full depth, resulting in a significant overhead in each step.{
}We show how to transform small-step MSOS specifications into corresponding big-step specifications via a two-step specialization by internalizing the rules implementing the transitive closure in MSOS and `refocusing' the small-step rules. Specialized specifications result in generated interpreters with significantly reduced interpretive overhead.
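The gap between the two styles of generated interpreter can be illustrated on a toy language (ours, not the paper's MSOS rules): a reduction-based driver takes the transitive closure of a small-step relation, re-traversing the term to find the next redex at every step, while a big-step evaluator of the kind the specialization produces visits each node once:

```python
def step(t):
    """One small step on nested tuples ('add', l, r) over integers.
    Returns the reduced term, or None if t is already a value."""
    if isinstance(t, int):
        return None
    op, l, r = t
    ls = step(l)
    if ls is not None:
        return (op, ls, r)      # reduce the left subterm first
    rs = step(r)
    if rs is not None:
        return (op, l, rs)
    return l + r                # both sides are values: contract

def run_small_step(t):
    """Transitive closure of `step`: iterate until a value is reached."""
    while not isinstance(t, int):
        t = step(t)
    return t

def eval_big_step(t):
    """Reduction-free big-step evaluator: no re-traversal per step."""
    if isinstance(t, int):
        return t
    _, l, r = t
    return eval_big_step(l) + eval_big_step(r)
```

Both functions compute the same values; the big-step version avoids the per-step traversal overhead that the summary calls "interpretive overhead".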
For the entire collection see [Zbl 1320.68017].Compiling a functional logic language: \textit{The Fair Scheme}.https://www.zbmath.org/1453.680392021-02-27T13:50:00+00:00"Antoy, Sergio"https://www.zbmath.org/authors/?q=ai:antoy.sergio"Jost, Andy"https://www.zbmath.org/authors/?q=ai:jost.andySummary: We present a compilation scheme for a functional logic programming language. The input program to our compiler is a constructor-based graph rewriting system in a non-confluent, but well-behaved class. This input is an intermediate representation of a functional logic program in a language such as Curry or \(\mathcal{TOY}\). The output program from our compiler consists of three procedures that make recursive calls and execute both rewrite and pull-tab steps. This output is an intermediate representation that is easy to encode in any number of programming languages. We formally and tersely define the compilation scheme from input to output programs. This compilation scheme is the only one to date that implements a deterministic strategy for non-deterministic computations with a proof of optimality and correctness.
For the entire collection see [Zbl 1320.68017].A logical encoding of timed \(\pi\)-calculus.https://www.zbmath.org/1453.681132021-02-27T13:50:00+00:00"Saeedloei, Neda"https://www.zbmath.org/authors/?q=ai:saeedloei.nedaSummary: We develop a logical encoding of the operational semantics of timed \(\pi\)-calculus: a real-time extension of Milner's \(\pi\)-calculus. This executable encoding is based on \textit{Horn logical semantics} of programming languages and directly leads to an implementation for timed \(\pi\)-calculus. This implementation can be used for modeling and verification of real-time and cyber-physical systems.
For the entire collection see [Zbl 1320.68017].Extending co-logic programs for branching-time model checking.https://www.zbmath.org/1453.680342021-02-27T13:50:00+00:00"Seki, Hirohisa"https://www.zbmath.org/authors/?q=ai:seki.hirohisaSummary: Co-logic programming is a programming language allowing each predicate to be annotated as either inductive or coinductive. Assuming the \textit{stratification restriction}, a condition on predicate dependency in co-logic programs (co-LPs), a top-down procedural semantics (\textit{co-SLD derivation}) as well as an alternating fixpoint semantics has been given. In this paper, we present some extensions of co-LPs, especially focusing on the relationship with the existing alternating tree automata approaches to branching-time model checking. We first consider the \textit{local} stratification restriction to allow a more general class of co-LPs, so that we can encode the CTL satisfaction relation as a co-LP, which is a direct encoding of the standard alternating automata by Kupferman et al. Next, we consider non-stratified co-LPs based on the Horn \(\mu\)-calculus. We give a proof procedure, \textit{co-SLD derivation with the parity acceptance condition}, for non-stratified co-LPs, and show that it is sound and complete for a class of non-stratified co-LPs. Its application to a goal-directed top-down proof procedure for normal logic programs is also discussed.
For the entire collection see [Zbl 1320.68017].Towards Erlang verification by term rewriting.https://www.zbmath.org/1453.680322021-02-27T13:50:00+00:00"Vidal, Germán"https://www.zbmath.org/authors/?q=ai:vidal.germanSummary: This paper presents a transformational approach to the verification of Erlang programs. We define a stepwise transformation from (first-order) Erlang programs to (non-deterministic) term rewrite systems that compute an overapproximation of the original Erlang program. In this way, existing techniques for term rewriting become available. Furthermore, one can use narrowing as a symbolic execution extension of rewriting in order to design a verification technique. We illustrate our approach with some examples, including a deadlock analysis of a simple Erlang program.
For the entire collection see [Zbl 1320.68017].From outermost reduction semantics to abstract machine.https://www.zbmath.org/1453.680452021-02-27T13:50:00+00:00"Danvy, Olivier"https://www.zbmath.org/authors/?q=ai:danvy.olivier"Johannsen, Jacob"https://www.zbmath.org/authors/?q=ai:johannsen.jacobSummary: Reduction semantics is a popular format for small-step operational semantics of deterministic programming languages with computational effects. Each reduction semantics gives rise to a reduction-based normalization function where the reduction sequence is enumerated. Refocusing is a practical way to transform a reduction-based normalization function into a reduction-free one where the reduction sequence is not enumerated. This reduction-free normalization function takes the form of an abstract machine that navigates from one redex site to the next without systematically detouring via the root of the term to enumerate the reduction sequence, in contrast to the reduction-based normalization function.{
}We have discovered that refocusing does not apply as readily for reduction semantics that use an outermost reduction strategy and have overlapping rules where a contractum can be a proper subpart of a redex. In this article, we consider such an outermost reduction semantics with backward-overlapping rules, and we investigate how to apply refocusing to still obtain a reduction-free normalization function in the form of an abstract machine.
For the entire collection see [Zbl 1320.68017].A finite representation of the narrowing space.https://www.zbmath.org/1453.680542021-02-27T13:50:00+00:00"Nishida, Naoki"https://www.zbmath.org/authors/?q=ai:nishida.naoki"Vidal, Germán"https://www.zbmath.org/authors/?q=ai:vidal.germanSummary: Narrowing basically extends rewriting by allowing free variables in terms and by replacing matching with unification. As a consequence, the search space of narrowing becomes usually infinite, as in logic programming. In this paper, we introduce the use of some operators that allow one to always produce a finite data structure that still represents all the narrowing derivations. Furthermore, we extract from this data structure a novel, compact equational representation of the (possibly infinite) answers computed by narrowing for a given initial term. Both the finite data structure and the equational representation of the computed answers might be useful in a number of areas, like program comprehension, static analysis, program transformation, etc.
For the entire collection see [Zbl 1320.68017].A transformational approach to resource analysis with typed-norms.https://www.zbmath.org/1453.680412021-02-27T13:50:00+00:00"Albert, Elvira"https://www.zbmath.org/authors/?q=ai:albert.elvira"Genaim, Samir"https://www.zbmath.org/authors/?q=ai:genaim.samir"Gutiérrez, Raúl"https://www.zbmath.org/authors/?q=ai:gutierrez.raulSummary: In order to automatically infer the resource consumption of programs, analyzers track how \textit{data sizes} change along a program's execution. Typically, analyzers measure the sizes of data by applying norms which are mappings from data to natural numbers that represent the sizes of the corresponding data. When norms are defined by taking type information into account, they are named \textit{typed-norms}. The main contribution of this paper is a transformational approach to resource analysis with typed-norms. The analysis is based on a transformation of the program into an \textit{intermediate abstract program} in which each variable is abstracted with respect to all considered norms which are valid for its type. We also sketch a simple analysis that can be used to automatically infer the required, useful, typed-norms from programs.
For the entire collection see [Zbl 1320.68017].Information flow in object-oriented software.https://www.zbmath.org/1453.680272021-02-27T13:50:00+00:00"Beckert, Bernhard"https://www.zbmath.org/authors/?q=ai:beckert.bernhard"Bruns, Daniel"https://www.zbmath.org/authors/?q=ai:bruns.daniel"Klebanov, Vladimir"https://www.zbmath.org/authors/?q=ai:klebanov.vladimir"Scheben, Christoph"https://www.zbmath.org/authors/?q=ai:scheben.christoph"Schmitt, Peter H."https://www.zbmath.org/authors/?q=ai:schmitt.peter-h"Ulbrich, Mattias"https://www.zbmath.org/authors/?q=ai:ulbrich.mattiasSummary: This paper contributes to the investigation of object-sensitive information flow properties for sequential Java, i.e., properties that take into account information leakage through objects, as opposed to primitive values. We present two improvements to a popular object-sensitive non-interference property. Both reduce the burden on analysis and monitoring tools. We present a formalization of this property in a program logic -- JavaDL in our case -- which allows using an existing tool without requiring program modification. The third contribution is a novel fine-grained specification methodology. In our approach, arbitrary JavaDL terms (read `side-effect-free Java expressions') may be assigned a security level -- in contrast to security labels being attached to fields and variables only.
For the entire collection see [Zbl 1320.68017].Formalization and execution of linear algebra: from theorems to algorithms.https://www.zbmath.org/1453.682102021-02-27T13:50:00+00:00"Aransay, Jesús"https://www.zbmath.org/authors/?q=ai:aransay.jesus"Divasón, Jose"https://www.zbmath.org/authors/?q=ai:divason.joseSummary: In this work we present a formalization of the \textit{Rank Nullity} theorem of Linear Algebra in Isabelle/HOL. The formalization is of interest for various reasons. First, it has been carried out based on the representation of mathematical structures proposed in the HOL Multivariate Analysis library of Isabelle/HOL (which is part of the standard distribution of the proof assistant). Hence, our proof shows the adequacy of such an infrastructure for the formalization of Linear Algebra. Moreover, we enrich the proof with an additional formalization of its \textit{computational} meaning; to this purpose, we choose to implement the Gauss-Jordan elimination algorithm for matrices over fields, prove it correct, and then apply the Isabelle code generation facility that permits one to \textit{execute} the formalized algorithm. For the algorithm to be code generated, we use again the implementation of matrices available in the HOL Multivariate Analysis library, and enrich it with some necessary features. We report on the precise modifications that we introduce to get code execution from the original representation, and on the performance of the code obtained. We present an alternative verified type refinement of vectors that outperforms the original version. This refinement performs well enough to be applied to the computation of the rank of some biomedical digital images. Our work proves itself as a suitable basis for the formalization of numerical Linear Algebra in HOL provers that can be successfully applied for computations of real case studies.
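The computational content of the Rank Nullity theorem can be checked on a small example. The following Python sketch (not the verified Isabelle/HOL code) computes the rank of a matrix by Gauss-Jordan elimination over the rationals and confirms that rank plus nullity equals the number of columns.

```python
from fractions import Fraction

def rank_gauss_jordan(rows):
    """Rank of a matrix via Gauss-Jordan elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, col, nrows, ncols = 0, 0, len(rows), len(rows[0])
    while rank < nrows and col < ncols:
        # find a pivot in the current column at or below the current row
        pivot = next((r for r in range(rank, nrows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        m[rank] = [x / m[rank][col] for x in m[rank]]   # normalize pivot row
        for r in range(nrows):                          # clear the column
            if r != rank and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row is twice the first
r = rank_gauss_jordan(A)
nullity = len(A[0]) - r                 # Rank Nullity: rank + dim(ker) = #columns
```

Exact rational arithmetic sidesteps the floating-point rank-decision issues that a numerical implementation would have to address.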
For the entire collection see [Zbl 1320.68017].A new method based on artificial neural networks for solving general nonlinear systems.https://www.zbmath.org/1453.651042021-02-27T13:50:00+00:00"Abbasnejad, H."https://www.zbmath.org/authors/?q=ai:abbasnejad.h"Jafarian, A."https://www.zbmath.org/authors/?q=ai:jafarian.amin|jafarian.ali-a|jafarian.ahmad|jafarian.amiri-seyyed-majidSummary: Implementation of the amazing features of the human brain in an artificial system has long been considered. It seems that simulating the human nervous system is a recent development in applied mathematics and computer sciences. The objective of this research is to introduce an efficient iterative method based on artificial neural networks for numerically solving nonlinear algebraic systems of polynomial equations. The method first performs some simple algebraic manipulations to convert the original system to an approximated unconstrained optimisation problem. Subsequently, the resulting nonlinear minimisation problem is solved iteratively using the neural networks approach. For this aim, a suitable five-layer feed-back neural architecture is formed and trained using a back-propagation supervised learning algorithm which is based on the gradient descent rule. Ultimately, some numerical examples with comparisons are given to demonstrate the high accuracy and the ease of implementation of the present technique over other classical methods.Some bounds for skewed \(\alpha\)-Jensen-Shannon divergence.https://www.zbmath.org/1453.940372021-02-27T13:50:00+00:00"Yamano, Takuya"https://www.zbmath.org/authors/?q=ai:yamano.takuyaSummary: Based on the skewed Kullback-Leibler divergence introduced in natural language processing, we derive the upper and lower bounds on the skewed version of the Jensen-Shannon divergence and investigate their properties. In the process, we generalize the Bretagnolle-Huber inequality that offers an upper bound on the skewed Kullback-Leibler divergence.
We further present how the skewed Jensen-Shannon divergence is bounded from below in terms of accuracy mismatch.Distributed adaptive optimization for generalized linear multiagent systems.https://www.zbmath.org/1453.931322021-02-27T13:50:00+00:00"Liu, Shuxin"https://www.zbmath.org/authors/?q=ai:liu.shuxin"Jiang, Haijun"https://www.zbmath.org/authors/?q=ai:jiang.haijun"Zhang, Liwei"https://www.zbmath.org/authors/?q=ai:zhang.liwei.1|zhang.liwei"Mei, Xuehui"https://www.zbmath.org/authors/?q=ai:mei.xuehuiSummary: In this paper, the edge-based and node-based adaptive algorithms are established, respectively, to solve the distributed convex optimization problem. The algorithms are based on multiagent systems with general linear dynamics; each agent uses only local information and cooperatively reaches the minimizer. Compared with existing results, the adaptive algorithms introduce a damping term in the adaptive law, which makes them more robust. Under some sufficient conditions, all agents asymptotically converge to the consensus value which minimizes the cost function. An example is provided to illustrate the effectiveness of the proposed algorithms.Pruning algorithms for pretropisms of Newton polytopes.https://www.zbmath.org/1453.520152021-02-27T13:50:00+00:00"Sommars, Jeff"https://www.zbmath.org/authors/?q=ai:sommars.jeff"Verschelde, Jan"https://www.zbmath.org/authors/?q=ai:verschelde.janSummary: Pretropisms are candidates for the leading exponents of Puiseux series that represent positive dimensional solution sets of polynomial systems. We propose a new algorithm to both horizontally and vertically prune the tree of edges of a tuple of Newton polytopes. We provide experimental results with our preliminary implementation in Sage that demonstrate that our algorithm compares favorably to the definitional algorithm.
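The skewed Kullback-Leibler and Jensen-Shannon divergences discussed in the entry above can be sketched numerically. The definitions below follow one common convention (divergence from the mixture \((1-\alpha)p + \alpha q\)); the paper's exact conventions may differ, and the two distributions are invented for illustration.

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence (natural log); assumes p_i > 0 => q_i > 0."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def skewed_kl(p, q, a):
    """Skewed KL: divergence of p from the mixture (1-a)*p + a*q."""
    m = [(1 - a) * pi + a * qi for pi, qi in zip(p, q)]
    return kl(p, m)

def skewed_js(p, q, a):
    """One common convention for the skewed Jensen-Shannon divergence."""
    m = [(1 - a) * pi + a * qi for pi, qi in zip(p, q)]
    return (1 - a) * kl(p, m) + a * kl(q, m)

p, q = [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]
```

At \(a = 1/2\) this recovers the ordinary Jensen-Shannon divergence, which is symmetric and bounded above by \(\ln 2\); at \(a = 0\) both skewed divergences vanish.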
For the entire collection see [Zbl 1346.68010].On the fixed-parameter tractability of the maximum connectivity improvement problem.https://www.zbmath.org/1453.681302021-02-27T13:50:00+00:00"Corò, Federico"https://www.zbmath.org/authors/?q=ai:coro.federico"D'Angelo, Gianlorenzo"https://www.zbmath.org/authors/?q=ai:dangelo.gianlorenzo"Mkrtchyan, Vahan"https://www.zbmath.org/authors/?q=ai:mkrtchyan.vahan-vSummary: In the Maximum Connectivity Improvement (MCI) problem, we are given a directed graph \(G = (V,E)\) and an integer \(B\) and we are asked to find \(B\) new edges to be added to \(G\) in order to maximize the number of connected pairs of vertices in the resulting graph. The MCI problem has been studied from the approximation point of view. In this paper, we approach it from the parameterized complexity perspective in the case of directed acyclic graphs. We show several hardness and algorithmic results with respect to different natural parameters. Our main result is that the problem is W[2]-hard for parameter \(B\) and it is FPT for parameters \(|V| - B\) and \(\nu\), the matching number of \(G\). We further characterize the MCI problem with respect to other complementary parameters.A lower bound for computing Lagrange's real root bound.https://www.zbmath.org/1453.260102021-02-27T13:50:00+00:00"Prabhakar, Swaroop N."https://www.zbmath.org/authors/?q=ai:prabhakar.swaroop-n"Sharma, Vikram"https://www.zbmath.org/authors/?q=ai:sharma.vikram|sharma.vikram.1Summary: In this paper, we study a bound due to Lagrange on the real roots of a polynomial. From known results in the literature, it follows that Lagrange's bound is also a bound on the absolute positiveness of a polynomial. A simple \(O(n\log n)\) algorithm described by \textit{K. Mehlhorn} and \textit{S. Ray} [J. Symb. Comput. 45, No. 6, 677--683 (2010; Zbl 1206.11151)] can be used to compute the bound. Our main result is that this is optimal in the real RAM model.
Our paper explores the tradeoff between improving the quality of bounds on absolute positiveness and their computational complexity.
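Root bounds of this general flavor are easy to illustrate. The sketch below implements two elementary textbook bounds on the magnitude of all roots (a classical Lagrange-type bound and Cauchy's bound), not the sharper Lagrange bound on positive real roots that Mehlhorn and Ray compute in \(O(n\log n)\); the sample polynomial is invented.

```python
def lagrange_bound(coeffs):
    """Classical Lagrange-type bound on |roots|: max(1, sum |a_i / a_n|).
    Coefficients are in descending degree order, leading coefficient first."""
    lead = abs(coeffs[0])
    return max(1.0, sum(abs(c) / lead for c in coeffs[1:]))

def cauchy_bound(coeffs):
    """Cauchy's bound on |roots|: 1 + max |a_i / a_n|, for comparison."""
    lead = abs(coeffs[0])
    return 1.0 + max(abs(c) / lead for c in coeffs[1:])

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6; largest real root is 3.
p = [1, -6, 11, -6]
```

Both bounds are valid but loose here (23 and 12 against a true largest root of 3), which is exactly why sharper, efficiently computable bounds such as Lagrange's real root bound are of interest.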
For the entire collection see [Zbl 1346.68010].Quadric arrangement in classifying rigid motions of a 3D digital image.https://www.zbmath.org/1453.940152021-02-27T13:50:00+00:00"Pluta, Kacper"https://www.zbmath.org/authors/?q=ai:pluta.kacper"Moroz, Guillaume"https://www.zbmath.org/authors/?q=ai:moroz.guillaume"Kenmochi, Yukiko"https://www.zbmath.org/authors/?q=ai:kenmochi.yukiko"Romon, Pascal"https://www.zbmath.org/authors/?q=ai:romon.pascalSummary: Rigid motions are fundamental operations in image processing. While bijective and isometric in \(\mathbb {R}^3\), they lose these properties when digitized in \(\mathbb {Z}^3\). To understand how the digitization of 3D rigid motions affects the topology and geometry of a chosen image patch, we classify the rigid motions according to their effect on the image patch. This classification can be described by an arrangement of hypersurfaces in the parameter space of 3D rigid motions of dimension six. However, its high dimensionality and the existence of degenerate cases make a direct application of classical techniques, such as cylindrical algebraic decomposition or the critical point method, difficult. We show that this problem can be first reduced to computing sample points in an arrangement of quadrics in the 3D parameter space of rotations. Then we recover information about the remaining three parameters of translation. We implemented an ad-hoc variant of state-of-the-art algorithms and applied it to an image patch of cardinality 7. This leads to an arrangement of 81 quadrics and we recovered the classification in less than one hour on a machine equipped with 40 cores.
For the entire collection see [Zbl 1346.68010].A survey of satisfiability modulo theory.https://www.zbmath.org/1453.681162021-02-27T13:50:00+00:00"Monniaux, David"https://www.zbmath.org/authors/?q=ai:monniaux.david-pSummary: Satisfiability modulo theory (SMT) consists in testing the satisfiability of first-order formulas over linear integer or real arithmetic, or other theories. In this survey, we explain the combination of propositional satisfiability and decision procedures for conjunctions known as DPLL(T), and the alternative ``natural domain'' approaches. We also cover quantifiers, Craig interpolants, polynomial arithmetic, and how SMT solvers are used in automated software analysis.
For the entire collection see [Zbl 1346.68010].Using sparse interpolation in Hensel lifting.https://www.zbmath.org/1453.130712021-02-27T13:50:00+00:00"Monagan, Michael"https://www.zbmath.org/authors/?q=ai:monagan.michael-b"Tuncer, Baris"https://www.zbmath.org/authors/?q=ai:tuncer.barisSummary: The standard approach to factor a multivariate polynomial in \(\mathbb {Z}[x_1, x_2, \ldots, x_n]\) is to factor a univariate image in \(\mathbb {Z}[x_1]\) then lift the factors of the image one variable at a time using Hensel lifting to recover the multivariate factors. At each step one must solve a multivariate polynomial Diophantine equation. For polynomials in many variables with many terms we find that solving these multivariate Diophantine equations dominates the factorization time. In this paper we explore the use of sparse interpolation methods, originally introduced by Zippel, to speed this up. We present experimental results in Maple showing that we are able to dramatically speed this up and thereby achieve a good improvement for multivariate polynomial factorization.
For the entire collection see [Zbl 1346.68010].Hybrid adaptive random testing.https://www.zbmath.org/1453.680592021-02-27T13:50:00+00:00"Nikravan, Esmaeel"https://www.zbmath.org/authors/?q=ai:nikravan.esmaeel"Parsa, Saeed"https://www.zbmath.org/authors/?q=ai:parsa.saeedSummary: Adaptive random testing (ART) subsumes a family of random testing techniques offering an effective improvement over random testing. It is based on the observation that failure-causing inputs tend to be clustered together. Hence the ART methods spread test cases more evenly within the input domain to improve the fault-detection capability of random testing. There have been several implementations of ART based on different intuitions and principles with their own advantages and disadvantages. Most variants of ART use a variety of distance calculations, with corresponding computational overhead. Newer methods try to decrease the computational overhead while maintaining performance by partitioning the input domain. We outline a new partitioning-based ART algorithm with a hybrid search method and demonstrate experimentally that it can further improve the performance, with considerably lower overhead than other ART algorithms.Integrating fuzzy formal concept analysis and rough set theory for the semantic web.https://www.zbmath.org/1453.681782021-02-27T13:50:00+00:00"Formica, Anna"https://www.zbmath.org/authors/?q=ai:formica.annaSummary: Formal Concept Analysis and Rough Set Theory provide two mathematical frameworks in information management which have been developed almost independently in the past. Currently, their integration is proving very interesting in different research fields, such as knowledge discovery, data mining, information retrieval, e-learning, and ontology engineering.
In this paper, we show how Rough Set Theory can be employed in combination with a generalization of Formal Concept Analysis for modeling uncertainty information (Fuzzy Formal Concept Analysis) to perform Semantic Web search. In particular, this paper presents an updated evaluation of a previous proposal of the author, undertaken because of the increasing interest in this topic and, at the same time, the absence in the literature of significant proposals combining these two frameworks.A generalised branch-and-bound approach and its application in SAT modulo nonlinear integer arithmetic.https://www.zbmath.org/1453.901862021-02-27T13:50:00+00:00"Kremer, Gereon"https://www.zbmath.org/authors/?q=ai:kremer.gereon"Corzilius, Florian"https://www.zbmath.org/authors/?q=ai:corzilius.florian"Ábrahám, Erika"https://www.zbmath.org/authors/?q=ai:abraham.erikaSummary: The branch-and-bound framework has already been successfully applied in SAT-modulo-theories (SMT) solvers to check the satisfiability of linear integer arithmetic formulas. In this paper we study how it can be used in SMT solvers for non-linear integer arithmetic on top of two real-algebraic decision procedures: the virtual substitution and the cylindrical algebraic decomposition methods. We implemented this approach in our SMT solver SMT-RAT and compared it with the currently best-performing SMT solvers for this logic, which are mostly based on bit-blasting. Furthermore, we implemented a combination of our approach with bit-blasting that outperforms the state-of-the-art SMT solvers for most instances.
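The generic branch-and-bound principle behind the approach above (branch on a decision, prune subtrees whose relaxation bound cannot beat the incumbent) can be sketched on a toy integer problem. This 0/1 knapsack example is an invented illustration of the framework, not the SMT-RAT procedure for nonlinear integer arithmetic.

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound on a toy 0/1 knapsack: branch on whether to take
    each item, prune with the fractional-relaxation upper bound."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)
    best = 0

    def bound(pos, value, room):
        # Optimistic bound: fill the remaining room greedily, allowing a
        # fractional piece of the first item that does not fit.
        for i in order[pos:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def branch(pos, value, room):
        nonlocal best
        if pos == len(order):
            best = max(best, value)
            return
        if bound(pos, value, room) <= best:
            return  # prune: even the relaxation cannot beat the incumbent
        i = order[pos]
        if weights[i] <= room:
            branch(pos + 1, value + values[i], room - weights[i])  # take i
        branch(pos + 1, value, room)                               # skip i

    branch(0, 0, capacity)
    return best

best_value = knapsack_bb([60, 100, 120], [10, 20, 30], 50)
```

In the SMT setting the role of the relaxation is played by a decision procedure over the reals, and branching splits on integer variables rather than items; the pruning logic is the same.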
For the entire collection see [Zbl 1346.68010].Smart grid short-term load estimation model based on BP neural network.https://www.zbmath.org/1453.900402021-02-27T13:50:00+00:00"Shi, Jianqiang"https://www.zbmath.org/authors/?q=ai:shi.jianqiang"Chengchao, Shi"https://www.zbmath.org/authors/?q=ai:chengchao.shi"Lei, Han"https://www.zbmath.org/authors/?q=ai:lei.han"Mengxi, Xu"https://www.zbmath.org/authors/?q=ai:mengxi.xuSummary: As a reasonable short-term load estimation system can provide reliable support for the operating, planning and designing of the smart grid, in this paper, we propose an effective smart grid short-term load estimation method. Different types of data are input to the BP neural network and then the output of the BP neural network is represented as the load estimation results. Although a BP neural network can approximate any nonlinear continuous function with the condition of a specific structure and suitable weights, it is very difficult to obtain the global minimum result. In order to obtain the global optimum solution in short-term load estimation, we exploit the genetic algorithm to optimise the weights and thresholds of the BP neural network, which is the main advantage of the proposed model. Finally, experimental results demonstrate that the proposed method can estimate the short-term load of a smart grid with higher accuracy and it can also clearly show the load requirement distribution in different time periods.Optimisation of makespan of a flow shop problem using multi layer neural network.https://www.zbmath.org/1453.900702021-02-27T13:50:00+00:00"Kumar, Harendra"https://www.zbmath.org/authors/?q=ai:kumar.harendra"Giri, Shailendra"https://www.zbmath.org/authors/?q=ai:giri.shailendraSummary: This paper presents an approach based on a multi layer neural network algorithm (MLNNA) to find a sequence of jobs for flow shop scheduling problems with the objective of minimising the makespan.
The purpose of this paper is to develop and train an artificial neural network model for solving the flow shop scheduling problem, which gives a best job sequence with the objective of minimising the makespan. The effectiveness of the proposed MLNNA method is assessed on many problems selected from different papers. A large number of problems are solved with the present MLNNA model and it is found suitable and workable in all the cases.A symbolic investigation of the influence of aerodynamic forces on satellite equilibria.https://www.zbmath.org/1453.700112021-02-27T13:50:00+00:00"Gutnik, Sergey A."https://www.zbmath.org/authors/?q=ai:gutnik.sergey-a"Sarychev, Vasily A."https://www.zbmath.org/authors/?q=ai:sarychev.vasily-aSummary: Computer algebra methods are used to study the properties of a nonlinear algebraic system that determines equilibrium orientations of a satellite moving along a circular orbit under the action of gravitational and aerodynamic torques. An algorithm for the construction of a Gröbner basis is proposed for determining the equilibrium orientations of a satellite with given principal central moments of inertia and given aerodynamic torque in special cases, when the center of pressure of aerodynamic forces is located in one of the principal central planes of inertia of the satellite. The conditions for the existence of the equilibria are obtained, depending on three dimensionless parameters of the problem. The number of equilibria depending on the parameters is found by the analysis of real roots of algebraic equations from the constructed Gröbner basis. The evolution of domains with a fixed number of equilibria from 24 to 8 is investigated in detail. All bifurcation values of the system parameters corresponding to the qualitative change of these domains are determined.
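For context on the flow-shop makespan objective in the entry above: the two-machine special case is solvable exactly by Johnson's classical rule, a useful baseline against heuristic or neural approaches. The job data below are invented for illustration.

```python
def johnsons_rule(jobs):
    """Johnson's rule: a makespan-optimal job order for a 2-machine flow shop.
    jobs is a list of (time_on_machine_1, time_on_machine_2) pairs."""
    front, back = [], []
    # Scan jobs in increasing order of their smaller processing time:
    # if the minimum is on machine 1, schedule early; otherwise, late.
    for idx, (t1, t2) in sorted(enumerate(jobs), key=lambda j: min(j[1])):
        (front if t1 <= t2 else back).append(idx)
    return front + back[::-1]

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    end1 = end2 = 0
    for i in order:
        end1 += jobs[i][0]                  # machine 1 runs back to back
        end2 = max(end2, end1) + jobs[i][1]  # machine 2 waits for machine 1
    return end2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 3)]
order = johnsons_rule(jobs)
```

For this instance the schedule attains the lower bound "total machine-1 time plus the smallest machine-2 time", so it is visibly optimal; for three or more machines the problem is NP-hard, which motivates heuristics like the MLNNA above.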
For the entire collection see [Zbl 1346.68010].Symbolic algorithm for generating irreducible rotational-vibrational bases of point groups.https://www.zbmath.org/1453.200022021-02-27T13:50:00+00:00"Gusev, A. A."https://www.zbmath.org/authors/?q=ai:gusev.alexander-a"Gerdt, V. P."https://www.zbmath.org/authors/?q=ai:gerdt.vladimir-p"Vinitsky, S. I."https://www.zbmath.org/authors/?q=ai:vinitsky.sergue-i"Derbov, V. L."https://www.zbmath.org/authors/?q=ai:derbov.vladimir-l"Góźdź, A."https://www.zbmath.org/authors/?q=ai:gozdz.andrzej"Pȩdrak, A."https://www.zbmath.org/authors/?q=ai:pedrak.a"Szulerecka, A."https://www.zbmath.org/authors/?q=ai:szulerecka.a"Dobrowolski, A."https://www.zbmath.org/authors/?q=ai:dobrowolski.arkadiuszSummary: A symbolic algorithm implemented in a computer algebra system for generating irreducible representations of the point symmetry groups in the rotor + shape vibrational space of a nuclear collective model in the intrinsic frame is presented. The method of generalized projection operators is used. The generalized projection operators for the intrinsic group acting in the space \(\mathrm {L}^2(\mathrm{SO(3)})\) and in the space spanned by the eigenfunctions of a multidimensional harmonic oscillator are constructed. The efficiency of the scheme is investigated by calculating the bases of irreducible representations of the subgroup \(\overline{\mathrm{D}}_{4y}\) of the octahedral group in the intrinsic frame of a quadrupole-octupole nuclear collective model.
For the entire collection see [Zbl 1346.68010].Symbolic-numeric algorithms for solving BVPs for a system of ODEs of the second order: multichannel scattering and eigenvalue problems.https://www.zbmath.org/1453.651872021-02-27T13:50:00+00:00"Gusev, A. A."https://www.zbmath.org/authors/?q=ai:gusev.alexander-a"Gerdt, V. P."https://www.zbmath.org/authors/?q=ai:gerdt.vladimir-p"Hai, L. L."https://www.zbmath.org/authors/?q=ai:hai.lili"Derbov, V. L."https://www.zbmath.org/authors/?q=ai:derbov.vladimir-l"Vinitsky, S. I."https://www.zbmath.org/authors/?q=ai:vinitsky.sergue-i"Chuluunbaatar, O."https://www.zbmath.org/authors/?q=ai:chuluunbaatar.ochbadrakhSummary: Symbolic-numeric algorithms for solving multichannel scattering and eigenvalue problems of the waveguide or tunneling type for systems of ODEs of the second order with continuous and piecewise continuous coefficients on an axis are presented. The boundary-value problems are formulated and discretized using the FEM on a finite interval with interpolating Hermite polynomials that provide the required continuity of the derivatives of the approximated solutions. The accuracy of the approximate solutions of the boundary-value problems, reduced to a finite interval, is checked by comparing them with the solutions of the original boundary-value problems on the entire axis, which are calculated by matching the fundamental solutions of the ODE system. The efficiency of the algorithms implemented in the computer algebra system Maple is demonstrated by calculating the resonance states of a multichannel scattering problem on the axis for clusters of a few identical particles tunneling through Gaussian barriers.
For the entire collection see [Zbl 1346.68010].Cross-validation with confidence.https://www.zbmath.org/1453.625802021-02-27T13:50:00+00:00"Lei, Jing"https://www.zbmath.org/authors/?q=ai:lei.jingSummary: Cross-validation is one of the most popular model and tuning parameter selection methods in statistics and machine learning. Despite its wide applicability, traditional cross-validation methods tend to overfit because they ignore the uncertainty in the testing sample. We develop a novel statistically principled inference tool based on cross-validation that takes into account the uncertainty in the testing sample. This method outputs a set of highly competitive candidate models containing the optimal one with guaranteed probability. As a consequence, our method can achieve consistent variable selection in a classical linear regression setting, for which existing cross-validation methods require unconventional split ratios. When used for tuning parameter selection, the method can provide an alternative trade-off between prediction accuracy and model interpretability compared to existing variants of cross-validation. We demonstrate the performance of the proposed method in several simulated and real data examples.The complexity of cylindrical algebraic decomposition with respect to polynomial degree.https://www.zbmath.org/1453.130792021-02-27T13:50:00+00:00"England, Matthew"https://www.zbmath.org/authors/?q=ai:england.matthew"Davenport, James H."https://www.zbmath.org/authors/?q=ai:davenport.james-haroldSummary: Cylindrical algebraic decomposition (CAD) is an important tool for working with polynomial systems, particularly for quantifier elimination. However, it has complexity doubly exponential in the number of variables. The base algorithm can be improved by adapting it to take advantage of any equational constraints (ECs): equations logically implied by the input. Intuitively, we expect the double exponent in the complexity to decrease by one for each EC.
In ISSAC 2015 the present authors [Proceedings of the 40th international symposium on symbolic and algebraic computation. New York, NY: ACM, 165--172 (2015; Zbl 1346.68283)] proved this for the factor in the complexity bound dependent on the number of polynomials in the input. However, the other term, that dependent on the degree of the input polynomials, remained unchanged.
In the present paper the authors investigate how CAD in the presence of ECs could be further refined using the technology of Gröbner bases to move towards the intuitive bound for polynomial degree.
For the entire collection see [Zbl 1346.68010].Bifurcation of a fractional-order delayed malware propagation model in social networks.https://www.zbmath.org/1453.340942021-02-27T13:50:00+00:00"Xu, Changjin"https://www.zbmath.org/authors/?q=ai:xu.changjin"Liao, Maoxin"https://www.zbmath.org/authors/?q=ai:liao.maoxin"Li, Peiluan"https://www.zbmath.org/authors/?q=ai:li.peiluanSummary: In recent years, with the rapid development of the Internet and the Internet of Things, network security is urgently needed. Malware has become a major threat to network security. Thus, the study of malware propagation models plays an important role in network security. In the past few decades, numerous researchers put forward various kinds of malware propagation models to analyze the dynamic interaction. However, many works are only concerned with the integer-order malware propagation models, while investigations of fractional-order ones are very few. In this paper, based on the earlier works, we will put forward a new fractional-order delayed malware propagation model. Letting the delay be the bifurcation parameter and analyzing the corresponding characteristic equations of the considered system, we will establish a set of new sufficient conditions to guarantee the stability and the existence of Hopf bifurcation of the fractional-order delayed malware propagation model. The study shows that the delay and the fractional order have an important effect on the stability and Hopf bifurcation of the considered system. To check the correctness of theoretical analyses, we carry out some computer simulations. At last, a simple conclusion is drawn. The derived results of this paper are completely innovative and play an important guiding role in network security.Algorithmic computation of polynomial amoebas.https://www.zbmath.org/1453.141452021-02-27T13:50:00+00:00"Bogdanov, D. V."https://www.zbmath.org/authors/?q=ai:bogdanov.d-v.1"Kytmanov, A. A."https://www.zbmath.org/authors/?q=ai:kytmanov.a-a"Sadykov, T. M."https://www.zbmath.org/authors/?q=ai:sadykov.t-mSummary: We present algorithms for computation and visualization of polynomial amoebas, their contours, compactified amoebas and sections of three-dimensional amoebas by two-dimensional planes. We also provide a method and an algorithm for the computation of polynomials whose amoebas exhibit the most complicated topology among all polynomials with a fixed Newton polytope. The presented algorithms are implemented in computer algebra systems Matlab 8 and Mathematica 9.
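The amoeba of a polynomial is the image of its zero set under the coordinate-wise map \((z, w) \mapsto (\log|z|, \log|w|)\). The following sketch samples the amoeba of the invented example \(f(z, w) = 1 + z + w\) (whose zero set is parametrized by \(w = -1 - z\)); it illustrates the object of study only, not the paper's algorithms.

```python
import cmath
import math

def amoeba_points(n_radii=50, n_angles=40):
    """Sample the amoeba of f(z, w) = 1 + z + w: parametrize the zero set
    by w = -1 - z and record Log|.| of both coordinates."""
    points = []
    for i in range(n_radii):
        radius = 10 ** (-2 + 4 * i / (n_radii - 1))  # radii in [1e-2, 1e2]
        for k in range(n_angles):
            z = radius * cmath.exp(2j * math.pi * k / n_angles)
            w = -1 - z
            if z != 0 and w != 0:  # Log|.| undefined on coordinate axes
                points.append((math.log(abs(z)), math.log(abs(w))))
    return points

pts = amoeba_points()
```

The sampled points exhibit the three "tentacles" of this amoeba: for small \(|z|\) the image hugs the line \(\log|w| = 0\), and for large \(|z|\) it hugs the diagonal \(\log|w| = \log|z|\).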
For the entire collection see [Zbl 1346.68010].Computing all space curve solutions of polynomial systems by polyhedral methods.https://www.zbmath.org/1453.130842021-02-27T13:50:00+00:00"Bliss, Nathan"https://www.zbmath.org/authors/?q=ai:bliss.nathan"Verschelde, Jan"https://www.zbmath.org/authors/?q=ai:verschelde.janSummary: A polyhedral method to solve a system of polynomial equations exploits its sparse structure via the Newton polytopes of the polynomials. We propose a hybrid symbolic-numeric method to compute a Puiseux series expansion for every space curve that is a solution of a polynomial system. The focus of this paper concerns the difficult case when the leading powers of the Puiseux series of the space curve are contained in the relative interior of a higher dimensional cone of the tropical prevariety. We show that this difficult case does not occur for polynomials with generic coefficients. To resolve this case, we propose to apply polyhedral end games to recover tropisms hidden in the tropical prevariety.
For the entire collection see [Zbl 1346.68010].Improved computation of involutive bases.https://www.zbmath.org/1453.130742021-02-27T13:50:00+00:00"Binaei, Bentolhoda"https://www.zbmath.org/authors/?q=ai:binaei.bentolhoda"Hashemi, Amir"https://www.zbmath.org/authors/?q=ai:hashemi.amir"Seiler, Werner M."https://www.zbmath.org/authors/?q=ai:seiler.werner-mSummary: In this paper, we describe improved algorithms to compute Janet and Pommaret bases. To this end, based on the method proposed by \textit{H. Möller} et al. [International Symposium on Symbolic and Algebraic Computation, ISSAC 1992. ACM Press, 320--328 (1992; Zbl 0925.13010)], we present a more efficient variant of Gerdt's algorithm (than the algorithm presented in [\textit{V. P. Gerdt}, J. Symb. Comput. 59, 1--20 (2013; Zbl 1435.68391)]) to compute minimal involutive bases. Furthermore, by using an involutive version of the Hilbert driven technique along with the new variant of Gerdt's algorithm, we modify the algorithm given in [\textit{W. M. Seiler}, Appl. Algebra Eng. Commun. Comput. 20, No. 3-4, 261--338 (2009; Zbl 1175.13011)] to compute a linear change of coordinates for a given homogeneous ideal so that the new ideal (after performing this change) possesses a finite Pommaret basis. All the proposed algorithms have been implemented in \textsc{Maple} and their efficiency is discussed via a set of benchmark polynomials.
For the entire collection see [Zbl 1346.68010].Symbolic manipulation of flows of nonlinear evolution equations, with application in the analysis of split-step time integrators.https://www.zbmath.org/1453.651162021-02-27T13:50:00+00:00"Auzinger, Winfried"https://www.zbmath.org/authors/?q=ai:auzinger.winfried"Hofstätter, Harald"https://www.zbmath.org/authors/?q=ai:hofstatter.harald"Koch, Othmar"https://www.zbmath.org/authors/?q=ai:koch.othmarSummary: We describe a package realized in the Julia programming language which performs symbolic manipulations applied to nonlinear evolution equations, their flows, and commutators of such objects. This tool was employed to perform contrived computations arising in the analysis of the local error of operator splitting methods. It enabled the proof of the convergence of the basic method and of the asymptotical correctness of a defect-based error estimator. The performance of our package is illustrated on several examples.
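The local-error orders that such symbolic tools help analyze can be confirmed numerically on a linear toy problem \(u' = (A+B)u\) with noncommuting matrices: the Lie splitting \(e^{hA}e^{hB}\) has local error \(O(h^2)\), the Strang splitting \(e^{hA/2}e^{hB}e^{hA/2}\) has \(O(h^3)\). The matrices below are invented, and this pure-Python check is unrelated to the authors' Julia package.

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(X, s):
    return [[s * x for x in row] for row in X]

def expm(X, terms=25):
    """Matrix exponential of a small 2x2 matrix via its Taylor series."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        power = mat_scale(mat_mul(power, X), 1.0 / n)  # X^n / n!
        result = mat_add(result, power)
    return result

def splitting_error(h, scheme):
    """Max-entry local error of one splitting step against exp(h(A+B))."""
    A, B = [[0.0, 1.0], [0.0, 0.0]], [[0.0, 0.0], [1.0, 0.0]]  # [A,B] != 0
    exact = expm(mat_scale(mat_add(A, B), h))
    if scheme == "lie":
        approx = mat_mul(expm(mat_scale(A, h)), expm(mat_scale(B, h)))
    else:  # Strang
        half = expm(mat_scale(A, h / 2))
        approx = mat_mul(mat_mul(half, expm(mat_scale(B, h))), half)
    return max(abs(exact[i][j] - approx[i][j])
               for i in range(2) for j in range(2))
```

Halving the step size should shrink the Lie error by roughly 4 and the Strang error by roughly 8, matching the claimed local orders.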
For the entire collection see [Zbl 1346.68010].On the differential and full algebraic complexities of operator matrices transformations.https://www.zbmath.org/1453.340142021-02-27T13:50:00+00:00"Abramov, Sergei A."https://www.zbmath.org/authors/?q=ai:abramov.sergei-aSummary: We consider \(n\times n\)-matrices whose entries are scalar ordinary differential operators of order \(\le d\) over a constructive differential field \(K\). We show that to choose an algorithm to solve a problem related to such matrices it is reasonable to take into account the complexity measured as the number not only of arithmetic operations in \(K\) in the worst case but of all operations including differentiation. The algorithms that have the same complexity in terms of the number of arithmetic operations can though differ in the context of the full algebraic complexity that includes the necessary differentiations. Following this, we give a complexity analysis, first, of finding a superset of the set of singular points for solutions of a system of linear ordinary differential equations, and, second, of the unimodularity testing for an operator matrix and of constructing the inverse matrix if it exists.
For the entire collection see [Zbl 1346.68010].Martin-Löf random generalized Poisson processes.https://www.zbmath.org/1453.030402021-02-27T13:50:00+00:00"Axon, Logan"https://www.zbmath.org/authors/?q=ai:axon.logan-mSummary: Martin-Löf randomness was originally defined and studied in the context of the Cantor space \(2^\omega\). In [the author, J. Symb. Log. 80, No. 2, 359--383 (2015; Zbl 1375.03048)] probability theoretic random closed sets (RACS) are used as the foundation for the study of Martin-Löf randomness in spaces of closed sets. We use that framework to explore Martin-Löf randomness for the space of closed subsets of \(\mathbb{R}\) and a particular family of measures on this space, the generalized Poisson processes. This gives a novel class of Martin-Löf random closed subsets of \(\mathbb{R}\). We describe some of the properties of these Martin-Löf random closed sets; one result establishes that a real number is Martin-Löf random if and only if it is contained in some Martin-Löf random closed set.Dynamics modeling and analysis of SIS epidemic spreading in cluster networks.https://www.zbmath.org/1453.923222021-02-27T13:50:00+00:00"Tian, Jingjing"https://www.zbmath.org/authors/?q=ai:tian.jingjing"Li, Shuping"https://www.zbmath.org/authors/?q=ai:li.shupingSummary: In this paper, we propose and study an SIS epidemic model with clustering characteristics based on networks. Using the method of the existence of positive equilibrium point, we obtain the formula of the basic reproduction number \(R_0\). Furthermore, by constructing Lyapunov function, we also prove that the disease-free equilibrium of the model is globally asymptotically stable when \(R_0 < 1\). When \(R_0 > 1\), there is only one positive equilibrium point which is globally asymptotically stable. 
It is also shown that the infection proportion and the basic reproduction number \(R_0\) increase as the clustering coefficient increases when the average degree of networks is fixed.General six-step discrete-time Zhang neural network for time-varying tensor absolute value equations.https://www.zbmath.org/1453.920222021-02-27T13:50:00+00:00"Sun, Min"https://www.zbmath.org/authors/?q=ai:sun.min"Liu, Jing"https://www.zbmath.org/authors/?q=ai:liu.jing.1Summary: This article presents a general six-step discrete-time Zhang neural network (ZNN) for time-varying tensor absolute value equations. Firstly, based on Taylor expansion theory, we derive a general Zhang et al. discretization (ZeaD) formula, i.e., a general Taylor-type 1-step-ahead numerical differentiation rule for the first-order derivative approximation, which contains two free parameters. Based on the bilinear transform and the Routh-Hurwitz stability criterion, the effective domain of the two free parameters is analyzed, which can ensure the convergence of the general ZeaD formula. Secondly, based on the general ZeaD formula, we design a general six-step discrete-time ZNN (DTZNN) for time-varying tensor absolute value equations (TVTAVEs), whose steady-state residual error changes in a higher-order manner than those presented in the literature. Meanwhile, the feasible region of its step size, which determines its convergence, is also studied.
Finally, experimental results corroborate that the general six-step DTZNN model is quite efficient for TVTAVE solving.The conjugacy problem in free solvable groups and wreath products of abelian groups is in \(\mathsf{TC}^0\).https://www.zbmath.org/1453.200452021-02-27T13:50:00+00:00"Miasnikov, Alexei"https://www.zbmath.org/authors/?q=ai:myasnikov.alexei-g"Vassileva, Svetla"https://www.zbmath.org/authors/?q=ai:vassileva.svetla"Weiß, Armin"https://www.zbmath.org/authors/?q=ai:weiss.arminSummary: We show that the conjugacy problem in a wreath product \(A \wr B\) is uniform-\(\mathsf{TC}^0\)-Turing-reducible to the conjugacy problem in the factors \(A\) and \(B\) and the power problem in \(B\). If \(B\) is torsion free, the power problem in \(B\) can be replaced by the slightly weaker cyclic submonoid membership problem in \(B\). Moreover, if \(A\) is abelian, the cyclic subgroup membership problem suffices, which itself is uniform-\(\mathsf{AC}^0\)-many-one-reducible to the conjugacy problem in \(A \wr B\). Furthermore, under certain natural conditions, we give a uniform \(\mathsf{TC}^0\) Turing reduction from the power problem in \(A \wr B\) to the power problems of \(A\) and \(B\). Together with our first result, this yields a uniform \(\mathsf{TC}^0\) solution to the conjugacy problem in iterated wreath products of abelian groups -- and, by the Magnus embedding, also in free solvable groups.
It extends the Berlekamp-Massey algorithm to \(n\)-dimensional tables, for \(n > 1\).
We investigate this problem and design several algorithms for computing such a Gröbner basis of an ideal of relations using linear algebra techniques. The first one performs many table queries and is analogous to a change of variables on the ideal of relations.
As each query to the table can be expensive, we design a second algorithm requiring fewer queries, in general. This FGLM-like algorithm allows us to compute the relations of the table by extracting a full rank submatrix of a \textit{multi-Hankel} matrix (a multivariate generalization of Hankel matrices).
Under some additional assumptions, we design a third, adaptive algorithm that further reduces the number of table queries. Then, we relate the number of queries of this third algorithm to the \textit{geometry} of the final staircase and we show that it is essentially linear in the size of the output when the staircase is convex. As a direct application, we decode \(n\)-cyclic codes, a generalization in dimension \(n\) of Reed-Solomon codes.
We show that the multi-Hankel matrices are heavily structured when using the LEX ordering and that we can speed up the computations using fast algorithms for quasi-Hankel matrices. Finally, we design algorithms for computing the generating series of a linear recursive table.An advanced persistent distributed denial-of-service attacked dynamical model on networks.https://www.zbmath.org/1453.680212021-02-27T13:50:00+00:00"Zhang, Chunming"https://www.zbmath.org/authors/?q=ai:zhang.chunming"Peng, Junbiao"https://www.zbmath.org/authors/?q=ai:peng.junbiao"Xiao, Jingwei"https://www.zbmath.org/authors/?q=ai:xiao.jingweiSummary: The advanced persistent distributed denial-of-service (APDDoS) attack does serious harm to cyber security. Establishing a mathematical model to accurately predict APDDoS attacks on networks is still an important problem that needs to be solved. Therefore, to help us understand the attack mechanisms of APDDoS on networks, this paper first puts forward a novel dynamical model of APDDoS attacks on networks. A systematic analysis of this new model shows that the maximum eigenvalue of the networks is a vital factor that determines the success or failure of the attack. Moreover, a new sufficient condition for the global stability of the attack-free equilibrium is obtained. The global attractivity of the attacked equilibrium has also been proved.
Finally, this paper gives some numerical simulations to illustrate the main results.Computing abelian string regularities based on RLE.https://www.zbmath.org/1453.682302021-02-27T13:50:00+00:00"Sugimoto, Shiho"https://www.zbmath.org/authors/?q=ai:sugimoto.shiho"Noda, Naoki"https://www.zbmath.org/authors/?q=ai:noda.naoki"Inenaga, Shunsuke"https://www.zbmath.org/authors/?q=ai:inenaga.shunsuke"Bannai, Hideo"https://www.zbmath.org/authors/?q=ai:bannai.hideo"Takeda, Masayuki"https://www.zbmath.org/authors/?q=ai:takeda.masayukiSummary: Two strings \(x\) and \(y\) are said to be abelian equivalent if \(x\) is a permutation of \(y\), or vice versa. If a string \(z\) satisfies \(z = xy\) with \(x\) and \(y\) being abelian equivalent, then \(z\) is said to be an abelian square. If a string \(w\) can be factorized into a sequence \(v_1,\ldots,v_s\) of strings such that \(v_1,\ldots,v_{s-1}\) are all abelian equivalent and \(v_s\) is a substring of a permutation of \(v_1\), then \(w\) is said to have a regular abelian period \((p,t)\) where \(p=|v_1|\) and \(t=|v_s|\). If a substring \(w_1[i..i+\ell-1]\) of a string \(w_1\) and a substring \(w_2[j..j+\ell-1]\) of another string \(w_2\) are abelian equivalent, then the substrings are said to be a common abelian factor of \(w_1\) and \(w_2\), and if \(\ell\) is the maximum such length, then the substrings are said to be a longest common abelian factor of \(w_1\) and \(w_2\). We propose efficient algorithms which compute these abelian regularities using the run-length encoding (RLE) of strings. For a given string \(w\) of length \(n\) whose RLE is of size \(m\), we propose algorithms which compute all abelian squares occurring in \(w\) in \(O(mn)\) time, and all regular abelian periods of \(w\) in \(O(mn)\) time. For two given strings \(w_1\) and \(w_2\) of total length \(n\) and of total RLE size \(m\), we propose an algorithm which computes all longest common abelian factors in \(O(m^2n)\) time.
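The definitions in this summary can be checked against a naive baseline: abelian equivalence reduces to comparing character counts, and abelian squares can then be enumerated by brute force. The sketch below (with hypothetical function names) is such a baseline only; the paper's RLE-based \(O(mn)\)-time algorithms are substantially more involved and are not reproduced here.

```python
from collections import Counter

def abelian_equivalent(x, y):
    """x and y are abelian equivalent iff one is a permutation of the other,
    i.e. they have identical character counts."""
    return Counter(x) == Counter(y)

def abelian_squares(w):
    """Naive quadratic-time baseline: all (start, length) pairs such that
    w[start:start+length] is an abelian square, i.e. a concatenation xy
    with x and y abelian equivalent."""
    n, result = len(w), []
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if abelian_equivalent(w[i:i + half], w[i + half:i + 2 * half]):
                result.append((i, 2 * half))
    return result

print(abelian_squares("abba"))  # [(0, 4), (1, 2)]
```

On \(w = \texttt{abba}\) the baseline reports the abelian squares \(\texttt{abba}\) (since \(\texttt{ab}\) is a permutation of \(\texttt{ba}\)) and \(\texttt{bb}\).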
For the entire collection see [Zbl 1386.68003].Querying relational event graphs using colored range searching data structures.https://www.zbmath.org/1453.681282021-02-27T13:50:00+00:00"Chanchary, Farah"https://www.zbmath.org/authors/?q=ai:chanchary.farah"Maheshwari, Anil"https://www.zbmath.org/authors/?q=ai:maheshwari.anil"Smid, Michiel"https://www.zbmath.org/authors/?q=ai:smid.michiel-h-mSummary: We present a general approach for analyzing structural parameters of a relational event graph within arbitrary query time intervals using colored range query data structures. Relational event graphs generally represent social network datasets, where each graph edge carries a timestamp. We provide data structures based on colored range searching to efficiently compute several graph parameters (e.g., density, neighborhood overlap, \(h\)-index).
For the entire collection see [Zbl 1355.68015].Compression of dynamic graphs generated by a duplication model.https://www.zbmath.org/1453.680752021-02-27T13:50:00+00:00"Turowski, Krzysztof"https://www.zbmath.org/authors/?q=ai:turowski.krzysztof"Magner, Abram"https://www.zbmath.org/authors/?q=ai:magner.abram"Szpankowski, Wojciech"https://www.zbmath.org/authors/?q=ai:szpankowski.wojciechSummary: We continue building up the information theory of non-sequential data structures such as trees, sets, and graphs. In this paper, we consider dynamic graphs generated by a full duplication model in which a new vertex selects an existing vertex and copies all of its neighbors. We ask how many bits are needed to describe the labeled and unlabeled versions of such graphs. We first estimate entropies of both versions and then present asymptotically optimal compression algorithms up to two bits. Interestingly, for the full duplication model the labeled version needs \(\Theta (n)\) bits while its unlabeled version (structure) can be described by \(\Theta (\log n)\) bits due to a significant amount of symmetry (i.e., the large average size of the automorphism group of sample graphs).On the approximability of the stable matching problem with ties of size two.https://www.zbmath.org/1453.682122021-02-27T13:50:00+00:00"Chiang, Robert"https://www.zbmath.org/authors/?q=ai:chiang.robert"Pashkovich, Kanstantsin"https://www.zbmath.org/authors/?q=ai:pashkovich.kanstantsinSummary: The stable matching problem is one of the central problems of algorithmic game theory. If participants are allowed to have ties, the problem of finding a stable matching of maximum cardinality is an \(\mathcal{NP}\)-hard problem, even when the ties are of size two. Moreover, in this setting it is UGC-hard to provide an approximation with a constant factor smaller than 4/3. In this paper, we give a tight analysis of an approximation algorithm given by
\textit{C.-C. Huang} and \textit{T. Kavitha} [Math. Program. 154, No. 1--2 (B), 353--380 (2015; Zbl 1411.91422)]
for the maximum cardinality stable matching problem with ties of size two, demonstrating an improved 4/3-approximation factor.Parameterized multi-scenario single-machine scheduling problems.https://www.zbmath.org/1453.680952021-02-27T13:50:00+00:00"Hermelin, Danny"https://www.zbmath.org/authors/?q=ai:hermelin.danny"Manoussakis, George"https://www.zbmath.org/authors/?q=ai:manoussakis.george"Pinedo, Michael"https://www.zbmath.org/authors/?q=ai:pinedo.michael-l"Shabtay, Dvir"https://www.zbmath.org/authors/?q=ai:shabtay.dvir"Yedidsion, Liron"https://www.zbmath.org/authors/?q=ai:yedidsion.lironSummary: We study a class of multi-scenario single-machine scheduling problems. In this class of problems, we are given a set of scenarios with each one having a different realization of job characteristics. We consider these multi-scenario problems where the scheduling criterion can be any one of the following three: The total weighted completion time, the weighted number of tardy jobs, and the weighted number of jobs completed exactly at their due-date. As all the resulting problems are NP-hard, our analysis focuses on whether any one of the problems becomes tractable when some specific natural parameters are of limited size. 
The analysis includes the following parameters: The number of jobs with scenario-dependent processing times, the number of jobs with scenario-dependent weights, and the number of different due-dates.The complexity of tree partitioning.https://www.zbmath.org/1453.681202021-02-27T13:50:00+00:00"An, Zhao"https://www.zbmath.org/authors/?q=ai:an.zhao"Feng, Qilong"https://www.zbmath.org/authors/?q=ai:feng.qilong"Kanj, Iyad"https://www.zbmath.org/authors/?q=ai:kanj.iyad-a"Xia, Ge"https://www.zbmath.org/authors/?q=ai:xia.geSummary: Given a tree \(T\) on \(n\) vertices, and \(k, b, s_1, \ldots, s_b \in \mathbb{N}\), the Tree Partitioning problem asks if at most \(k\) edges can be removed from \(T\) so that the resulting components can be grouped into \(b\) groups such that the number of vertices in group \(i\) is \(s_i\), for \(i = 1, \ldots, b\). The case where \(s_1 = \cdots = s_b = n/b\), referred to as the Balanced Tree Partitioning problem, was shown to be \(\mathcal{NP}\)-complete for trees of maximum degree at most 5, and the complexity of the problem for trees of maximum degree 4 and 3 was posed as an open question. The parameterized complexity of Balanced Tree Partitioning was also posed as an open question in another work. In this paper, we answer both open questions negatively. We show that Balanced Tree Partitioning (and hence, Tree Partitioning) is \(\mathcal{NP}\)-complete for trees of maximum degree 3, thus closing the door on the complexity of Balanced Tree Partitioning, as the simple case when \(T\) is a path is in \(\mathcal{P}\). In terms of the parameterized complexity of the problems, we show that both Balanced Tree Partitioning and Tree Partitioning are W[1]-complete parameterized by \(k\). 
Using a compact representation of the solution space for an instance of the problem, we present a dynamic programming algorithm for the weighted version of Tree Partitioning (and hence for that of Balanced Tree Partitioning) that runs in subexponential-time \(2^{O(\sqrt{n})}\), adding a natural problem to the list of problems that can be solved in subexponential time. Finally, we extend this subexponential-time algorithm to the Weighted Graph Partitioning problem on graphs of treewidth \(o(n / \lg{n})\), and we also show an application of this subexponential-time algorithm for approximating the Weighted Graph Partitioning problem.Independent set reconfiguration parameterized by modular-width.https://www.zbmath.org/1453.681242021-02-27T13:50:00+00:00"Belmonte, Rémy"https://www.zbmath.org/authors/?q=ai:belmonte.remy"Hanaka, Tesshu"https://www.zbmath.org/authors/?q=ai:hanaka.tesshu"Lampis, Michael"https://www.zbmath.org/authors/?q=ai:lampis.michael"Ono, Hirotaka"https://www.zbmath.org/authors/?q=ai:ono.hirotaka"Otachi, Yota"https://www.zbmath.org/authors/?q=ai:otachi.yotaSummary: Independent Set Reconfiguration is one of the most well-studied problems in the setting of combinatorial reconfiguration. It is known that the problem is PSPACE-complete even for graphs of bounded bandwidth. This fact rules out the tractability of parameterizations by most well-studied structural parameters as most of them generalize bandwidth. In this paper, we study the parameterization by modular-width, which is not comparable with bandwidth. We show that the problem parameterized by modular-width is fixed-parameter tractable under all previously studied rules \(\mathsf{TAR}\), \(\mathsf{TJ}\), and \(\mathsf{TS}\). The result under \(\mathsf{TAR}\) resolves an open problem posed by \textit{P. Bonsma} [J. Graph Theory 83, No. 
2, 164--195 (2016; Zbl 1346.05209)].A \(3 + \Omega (1)\) lower bound for page migration.https://www.zbmath.org/1453.682202021-02-27T13:50:00+00:00"Matsubayashi, Akira"https://www.zbmath.org/authors/?q=ai:matsubayashi.akiraSummary: We address the page migration problem, one of the most classical online problems. In this problem, we are given online requests from nodes in a network for accessing a single page, i.e., a data set stored in a node, and asked to determine a node for the page to be stored in after each request. Serving a request costs the distance between the request and the page at the point of the request, and migrating the page costs the migration distance multiplied by the page size. The objective is to minimize the total sum of the service and migration costs. This problem is motivated by efficient cache management in multiprocessor systems. In this paper, we prove that no deterministic online page migration algorithm is \((3 + o(1))\)-competitive, where the \(o\)-notation is with respect to the page size. Our lower bound first breaks the barrier of 3 by an additive constant for an arbitrarily large page size and disproves Black and Sleator's conjecture even in the asymptotic sense.The approximability of multiple facility location on directed networks with random arc failures.https://www.zbmath.org/1453.682132021-02-27T13:50:00+00:00"Hassin, Refael"https://www.zbmath.org/authors/?q=ai:hassin.refael"Ravi, R."https://www.zbmath.org/authors/?q=ai:ravi.ramamoorthi"Salman, F. Sibel"https://www.zbmath.org/authors/?q=ai:salman.f-sibel"Segev, Danny"https://www.zbmath.org/authors/?q=ai:segev.dannySummary: We introduce and study the maximum reliability coverage problem, where multiple facilities are to be located on a network whose arcs are subject to random failures. Our model assumes that arcs fail independently with non-uniform probabilities, and the objective is to locate a given number of facilities, aiming to maximize the expected demand serviced. 
In this context, each demand point is said to be serviced (or covered) when it is reachable from at least one facility by an operational path. The main contribution of this paper is to establish tight bounds on the approximability of maximum reliability coverage on bidirected trees as well as on general networks. Quite surprisingly, we show that this problem is NP-hard on bidirected trees via a carefully constructed reduction from the partition problem. On the positive side, we make use of approximate dynamic programming ideas to devise an FPTAS on bidirected trees. For general networks, while maximum reliability coverage is \((1 - 1/e + \epsilon)\)-inapproximable as an extension of the max \(k\)-cover problem, even estimating its objective value is \#P-complete, due to generalizing certain network reliability problems. Nevertheless, we prove that by plugging in a sampling-based additive estimator into the standard greedy algorithm, a matching approximation ratio of \(1 - 1/e - \epsilon\) can be attained.
Based on several new insights into this problem, we develop in this paper a novel technique to overcome these difficulties. Our algorithm is capable of answering queries with high success probability in time no more than \(\tilde{O}_{\epsilon, d}(n^{0.5 + c})\), and the underlying data structure can be constructed in \(\tilde{O}_{\epsilon, d}(n^{1 + c})\) time for any \(c > 0\), where the hidden constant has only polynomial dependence on \(1 / \epsilon\) and \(d\). Our technique is simple and can be easily implemented for practical purposes.Algorithms for commutative algebras over the rational numbers.https://www.zbmath.org/1453.130582021-02-27T13:50:00+00:00"Lenstra, H. W. jun."https://www.zbmath.org/authors/?q=ai:lenstra.hendrik-w-jun"Silverberg, A."https://www.zbmath.org/authors/?q=ai:silverberg.aliceSummary: The algebras considered in this paper are commutative rings of which the additive group is a finite-dimensional vector space over the field of rational numbers. We present deterministic polynomial-time algorithms that, given such an algebra, determine its nilradical, all of its prime ideals, as well as the corresponding localizations and residue class fields, its largest separable subalgebra, and its primitive idempotents. We also solve the discrete logarithm problem in the multiplicative group of the algebra. While deterministic polynomial-time algorithms were known earlier, our approach is different from previous ones. One of our tools is a primitive element algorithm; it decides whether the algebra has a primitive element and, if so, finds one, all in polynomial time. A methodological novelty is the use of derivations to replace a Hensel-Newton iteration.
It leads to an explicit formula for lifting idempotents against nilpotents that is valid in any commutative ring.The higher topological complexity in digital images.https://www.zbmath.org/1453.550012021-02-27T13:50:00+00:00"İs, Melih"https://www.zbmath.org/authors/?q=ai:is.melih"Karaca, İsmet"https://www.zbmath.org/authors/?q=ai:karaca.ismetIn the framework of topological robotics, higher topological complexities have been introduced by \textit{Y. B. Rudyak} [Topology Appl. 157, No. 5, 916--920 (2010; Zbl 1187.55001)] which generalize Farber's topological complexity. In [Turk. J. Math. 42, No. 6, 3173--3181 (2018; Zbl 1438.68268)], the present authors introduced an analog of topological complexity in the framework of digital topology as a digital homotopy invariant of digital images. In the present article, the same authors define analogs of higher topological complexities in digital topology.
In addition, the authors provide a simple example showing that, unlike in the topological setting, the zero-divisor cup length in digital cohomology provides no lower bound for digital topological complexity.
In Sections 2 and 3, the authors recall basic notions of digital topology and digital homotopy theory, respectively, in a detailed and mostly self-contained way. Section 4 contains the definition of digital higher topological complexities and the computation of some simple examples. The aforementioned cohomology example is given in Section 5, which is independent of the rest of the article. A concluding Section 6 summarizes the results.
Reviewer: Stephan Mescher (Leipzig)Algorithmically random series and Brownian motion.https://www.zbmath.org/1453.030432021-02-27T13:50:00+00:00"Potgieter, Paul"https://www.zbmath.org/authors/?q=ai:potgieter.paulSummary: We consider some random series parametrised by Martin-Löf random sequences. The simplest case is that of Rademacher series, independent of a time parameter. This is then extended to the case of Fourier series on the circle with Rademacher coefficients. Finally, a specific Fourier series which has coefficients determined by a computable function is shown to converge to an algorithmically random Brownian motion.Stability analysis of inline ZFP compression for floating-point data in iterative methods.https://www.zbmath.org/1453.650962021-02-27T13:50:00+00:00"Fox, Alyson"https://www.zbmath.org/authors/?q=ai:fox.alyson-l"Diffenderfer, James"https://www.zbmath.org/authors/?q=ai:diffenderfer.james-d"Hittinger, Jeffrey"https://www.zbmath.org/authors/?q=ai:hittinger.jeffrey-a-f"Sanders, Geoffrey"https://www.zbmath.org/authors/?q=ai:sanders.geoffrey"Lindstrom, Peter"https://www.zbmath.org/authors/?q=ai:lindstrom.peter-g