Recent zbMATH articles in MSC 68https://www.zbmath.org/atom/cc/682022-05-16T20:40:13.078697ZUnknown authorWerkzeugOne hundred prisoners and a lightbulbhttps://www.zbmath.org/1483.000052022-05-16T20:40:13.078697Z"Dehaye, Paul-Olivier"https://www.zbmath.org/authors/?q=ai:dehaye.paul-olivier"Ford, Daniel"https://www.zbmath.org/authors/?q=ai:ford.daniel-j|ford.daniel-k|ford.daniel-a"Segerman, Henry"https://www.zbmath.org/authors/?q=ai:segerman.henrySummary: This column is a place for those bits of contagious mathematics that travel from person to person in the community, because they are so elegant, surprising, or appealing that one has an urge to pass them on.Introduction to the special issue on learning, optimization, and theory of G-networkshttps://www.zbmath.org/1483.000432022-05-16T20:40:13.078697ZFrom the text: Thus, both the theoretical developments and the potential for new applications have played an important role in the development of G-Networks. This special issue includes papers which represent both of these trends toward more theory and also broader applications.Editorial. Special issue on new frontiers in data sciences and data analytics tools and applicationshttps://www.zbmath.org/1483.000442022-05-16T20:40:13.078697ZFrom the text: The special issue on new frontiers in data science and data analytics tools and applications comprises very selectively chosen papers. 
Each paper presents an approved approach in e-commerce, text classification, the Internet-of-Things (IoT), data transmission of IoT devices, deep learning, data mining, and intelligent decision support systems.Proof tactics for assertions in separation logichttps://www.zbmath.org/1483.030042022-05-16T20:40:13.078697Z"Hóu, Zhé"https://www.zbmath.org/authors/?q=ai:hou.zhe"Sanán, David"https://www.zbmath.org/authors/?q=ai:sanan.david"Tiu, Alwen"https://www.zbmath.org/authors/?q=ai:tiu.alwen-fernanto"Liu, Yang"https://www.zbmath.org/authors/?q=ai:liu.yang.10Summary: This paper presents tactics for reasoning about the assertions of separation logic. We formalise our proof methods in Isabelle/HOL based on \textit{G. Klein} et al.'s [Lect. Notes Comput. Sci. 7406, 332--337 (2012; Zbl 1360.68754)] separation algebra library. Our methods can also be used in other separation logic frameworks that are instances of the separation algebra of Calcagno et al. The first method, separata, is based on an embedding of a labelled sequent calculus for abstract separation logic (ASL) by \textit{Z. Hóu} et al. [Lect. Notes Comput. Sci. 9195, 501--516 (2015; Zbl 1465.03053)]. The second method, starforce, is a refinement of separata with specialised proof search strategies to deal with separating conjunction and magic wand. We also extend our tactics to handle pointers in the heap model, giving a third method sepointer. Our tactics can automatically prove many complex formulae. Finally, we give two case studies on the application of our tactics.
For the entire collection see [Zbl 1369.68009].Kinetic consistency and relevance in belief revisionhttps://www.zbmath.org/1483.030122022-05-16T20:40:13.078697Z"Peppas, Pavlos"https://www.zbmath.org/authors/?q=ai:peppas.pavlos"Williams, Mary-Anne"https://www.zbmath.org/authors/?q=ai:williams.mary-anneSummary: A critical aspect of rational belief revision that has been neglected by the classical AGM framework is what we call the principle of kinetic consistency. Loosely speaking, this principle dictates that the revision policies employed by a rational agent at different belief sets are not independent, but ought to be related in a certain way. We formalise kinetic consistency axiomatically and semantically, and we establish a representation result explicitly connecting the two. We then combine the postulates for kinetic consistency with Parikh's postulate for relevant change, and add them to the classical AGM postulates for revision; we call this augmented set the extended AGM postulates. We prove the consistency and demonstrate the scope of the extended AGM postulates by showing that a whole new class of concrete revision operators introduced herein, called PD operators, satisfies all extended AGM postulates. PD operators are of interest in their own right as they are natural generalisations of Dalal's revision operator. We conclude the paper with some examples illustrating the strength of the extended AGM postulates, even for iterated revision scenarios.
For the entire collection see [Zbl 1350.68015].Characterizability in Horn belief revisionhttps://www.zbmath.org/1483.030142022-05-16T20:40:13.078697Z"Yaggie, Jon"https://www.zbmath.org/authors/?q=ai:yaggie.jon"Turán, György"https://www.zbmath.org/authors/?q=ai:turan.gyorgySummary: \textit{J. P. Delgrande} and \textit{P. Peppas} [Artif. Intell. 218, 1--22 (2015; Zbl 1322.68192)] characterized Horn belief revision operators obtained from Horn compliant faithful rankings by minimization, showing that a Horn belief revision operator belongs to this class if and only if it satisfies the Horn AGM postulates and the acyclicity postulate scheme. The acyclicity scheme has a postulate for every \(n\geq 3\) expressing the non-existence of a certain cyclic substructure. We show that this class of Horn belief revision operators cannot be characterized by finitely many postulates. Thus the use of infinitely many postulates in the result of Delgrande and Peppas [loc. cit.] is unavoidable. The proof uses our finite model theoretic approach to characterizability, considering universal monadic second-order logic with quantifiers over closed sets, and using predicates expressing minimality. We also give another non-characterizability result and add some remarks on strict Horn compliance.
For the entire collection see [Zbl 1350.68015].Prompt interval temporal logichttps://www.zbmath.org/1483.030152022-05-16T20:40:13.078697Z"Della Monica, Dario"https://www.zbmath.org/authors/?q=ai:della-monica.dario"Montanari, Angelo"https://www.zbmath.org/authors/?q=ai:montanari.angelo"Murano, Aniello"https://www.zbmath.org/authors/?q=ai:murano.aniello"Sala, Pietro"https://www.zbmath.org/authors/?q=ai:sala.pietroSummary: Interval temporal logics are expressive formalisms for temporal representation and reasoning, which use time intervals as primitive temporal entities. They have been extensively studied for the past two decades and successfully applied in AI and computer science. Unfortunately, they lack the ability of expressing promptness conditions, as it happens with the commonly-used temporal logics, e.g., LTL: whenever we deal with a liveness request, such as ``something good eventually happens'', there is no way to impose a bound on the delay with which it is fulfilled. In the last years, such an issue has been addressed in automata theory, game theory, and temporal logic. In this paper, we approach it in the interval temporal logic setting. First, we introduce PROMPT-PNL, a prompt extension of the well-studied interval temporal logic PNL, and we prove the undecidability of its satisfiability problem; then, we show how to recover decidability (NEXPTIME-completeness) by imposing a natural syntactic restriction on it.
For the entire collection see [Zbl 1350.68015].Conditional probability logic, lifted Bayesian networks, and almost sure quantifier eliminationhttps://www.zbmath.org/1483.030232022-05-16T20:40:13.078697Z"Koponen, Vera"https://www.zbmath.org/authors/?q=ai:koponen.veraThis paper tackles a prediction problem on finite but growing domains. To this end, lifted Bayesian networks are defined, which allow for the specification of probability distributions on arbitrarily large domains. The conditional probabilities of the lifted Bayesian networks are defined by expressions of a conditional probability logic, \(CPL\). The key ingredient of the logic \(CPL\) is a rule that allows one to express that the conditional probability of some sentence given some other sentence is greater than or equal to a threshold plus some other such conditional probability.
The main result of the paper is a theorem showing that for all sentences \(\varphi\) in a set of formulas of \(CPL\) there exist quantifier-free formulae \(\varphi^*\) which are almost surely equivalent to \(\varphi\). Furthermore, the \(\varphi^*\) do not depend on the size of the underlying domain and can be found by an effective algorithm. The run time of the algorithm is shown to be relatively low in many cases.
The structure of the paper is entirely standard. The first two sections introduce the topic, the problem, and the main notation. The third section states the main results, which are proved in the fourth section. Section 5 concludes by discussing possible future work and some connections to related areas of research.
Reviewer: Jürgen Landes (München)The satisfiability problem for Boolean set theory with a choice correspondencehttps://www.zbmath.org/1483.030292022-05-16T20:40:13.078697Z"Cantone, Domenico"https://www.zbmath.org/authors/?q=ai:cantone.domenico"Giarlotta, Alfio"https://www.zbmath.org/authors/?q=ai:giarlotta.alfio"Watson, Stephen"https://www.zbmath.org/authors/?q=ai:watson.stephen-wSummary: Given a set \(U\) of alternatives, a choice (correspondence) on \(U\) is a contractive map \(c\) defined on a family \(\Omega\) of nonempty subsets of \(U\). Semantically, a choice \(c\) associates to each menu \(A\in\Omega\) a nonempty subset \(c(A)\subseteq A\) comprising all elements of \(A\) that are deemed selectable by an agent. A choice on \(U\) is total if its domain is the powerset of \(U\) minus the empty set, and partial otherwise. According to the theory of revealed preferences, a choice is rationalizable if it can be retrieved from a binary relation on \(U\) by taking all maximal elements of each menu. It is well-known that rationalizable choices are characterized by the satisfaction of suitable axioms of consistency, which codify logical rules of selection within menus. For instance, \textsf{WARP} (Weak Axiom of Revealed Preference) characterizes choices rationalizable by a transitive relation. Here we study the satisfiability problem for unquantified formulae of an elementary fragment of set theory involving a choice function symbol \(\mathsf{c}\), the Boolean set operators and the singleton, the equality and inclusion predicates, and the propositional connectives. In particular, we consider the cases in which the interpretation of \(\mathsf{c}\) satisfies any combination of two specific axioms of consistency, whose conjunction is equivalent to \textsf{WARP}. In two cases we prove that the related satisfiability problem is NP-complete, whereas in the remaining cases we obtain NP-completeness under the additional assumption that the number of choice terms is constant.
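The rationalizability notion recalled above admits a compact sketch: given a strict preference relation, the induced choice correspondence picks the maximal elements of each menu. The function names and the example relation below are ours, purely illustrative, and not from the paper.

```python
from itertools import combinations

def rationalized_choice(universe, better):
    """Choice correspondence retrieved from a binary relation.

    `better` is a set of pairs (x, y) meaning x is strictly preferred
    to y.  For each nonempty menu A, c(A) is the set of maximal
    (undominated) elements of A.
    """
    def c(menu):
        return {x for x in menu
                if not any((y, x) in better for y in menu)}
    menus = [set(m) for r in range(1, len(universe) + 1)
             for m in combinations(universe, r)]
    return {frozenset(m): c(m) for m in menus}

# Example: a transitive relation a > b > c on U = {a, b, c}
choice = rationalized_choice({"a", "b", "c"},
                             {("a", "b"), ("b", "c"), ("a", "c")})
print(choice[frozenset({"a", "b", "c"})])  # {'a'}
print(choice[frozenset({"b", "c"})])       # {'b'}
```

Since the example relation is acyclic, every menu receives a nonempty selection, as the definition of a choice correspondence requires.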
For the entire collection see [Zbl 1436.68017].A proof theory for model checking: an extended abstracthttps://www.zbmath.org/1483.030342022-05-16T20:40:13.078697Z"Heath, Quentin"https://www.zbmath.org/authors/?q=ai:heath.quentin"Miller, Dale"https://www.zbmath.org/authors/?q=ai:miller.dale-aSummary: While model checking has often been considered as a practical alternative to building formal proofs, we argue here that the theory of sequent calculus proofs can be used to provide an appealing foundation for model checking. Since the emphasis of model checking is on establishing the truth of a property in a model, we rely on the proof theoretic notion of additive inference rules, since such rules allow provability to directly describe truth conditions. Unfortunately, the additive treatment of quantifiers requires inference rules to have infinite sets of premises and the additive treatment of model descriptions provides no natural notion of state exploration. By employing a focused proof system, it is possible to construct large scale, synthetic rules that also qualify as additive but contain elements of multiplicative inference. These additive synthetic rules -- essentially rules built from the description of a model -- allow a direct treatment of state exploration. This proof theoretic framework provides a natural treatment of reachability and non-reachability problems, as well as tabled deduction, bisimulation, and winning strategies.
For the entire collection see [Zbl 1436.68021].Hierarchical partial planarityhttps://www.zbmath.org/1483.050432022-05-16T20:40:13.078697Z"Angelini, Patrizio"https://www.zbmath.org/authors/?q=ai:angelini.patrizio"Bekos, Michael A."https://www.zbmath.org/authors/?q=ai:bekos.michael-aSummary: In this paper we consider graphs whose edges are associated with a degree of importance, which may depend on the type of connections they represent or on how recently they appeared in the scene, in a streaming setting. The goal is to construct layouts in which the readability of an edge is proportional to its importance, that is, more important edges have fewer crossings. We formalize this problem and study the case in which there exist three different degrees of importance. We give a polynomial-time testing algorithm when the graph induced by the two most important sets of edges is biconnected. We also discuss interesting relationships with other constrained-planarity problems.
For the entire collection see [Zbl 1374.68006].On the relationship between \(k\)-planar and \(k\)-quasi-planar graphshttps://www.zbmath.org/1483.050442022-05-16T20:40:13.078697Z"Angelini, Patrizio"https://www.zbmath.org/authors/?q=ai:angelini.patrizio"Bekos, Michael A."https://www.zbmath.org/authors/?q=ai:bekos.michael-a"Brandenburg, Franz J."https://www.zbmath.org/authors/?q=ai:brandenburg.franz-josef"Da Lozzo, Giordano"https://www.zbmath.org/authors/?q=ai:da-lozzo.giordano"Di Battista, Giuseppe"https://www.zbmath.org/authors/?q=ai:di-battista.giuseppe"Didimo, Walter"https://www.zbmath.org/authors/?q=ai:didimo.walter"Liotta, Giuseppe"https://www.zbmath.org/authors/?q=ai:liotta.giuseppe"Montecchiani, Fabrizio"https://www.zbmath.org/authors/?q=ai:montecchiani.fabrizio"Rutter, Ignaz"https://www.zbmath.org/authors/?q=ai:rutter.ignazSummary: A graph is \(k\)-planar \((k\geq 1)\) if it can be drawn in the plane such that no edge is crossed \(k+1\) times or more. A graph is \(k\)-quasi-planar \((k\geq 2)\) if it can be drawn in the plane with no \(k\) pairwise crossing edges. The families of \(k\)-planar and \(k\)-quasi-planar graphs have been widely studied in the literature, and several bounds have been proven on their edge density. Nonetheless, only trivial results are known about the relationship between these two graph families. In this paper we prove that, for \(k\geq 3\), every \(k\)-planar graph is \((k+1)\)-quasi-planar.
For the entire collection see [Zbl 1374.68006].Homothetic triangle contact representationshttps://www.zbmath.org/1483.050492022-05-16T20:40:13.078697Z"Schrezenmaier, Hendrik"https://www.zbmath.org/authors/?q=ai:schrezenmaier.hendrikSummary: We prove that every 4-connected planar triangulation admits a contact representation by homothetic triangles.
There is a known proof of this result that is based on the Convex Packing theorem by Schramm, a general result about contact representations of planar triangulations by convex shapes. But our approach makes use of the combinatorial structure of triangle contact representations in terms of Schnyder woods. We start with an arbitrary Schnyder wood and produce a sequence of Schnyder woods via face flips. We show that at some point the sequence has to reach a Schnyder wood describing a representation by homothetic triangles.
For the entire collection see [Zbl 1374.68006].The vertex Turán density in 3-ary \(n\)-cubeshttps://www.zbmath.org/1483.050832022-05-16T20:40:13.078697Z"Li, Xiao-Chen"https://www.zbmath.org/authors/?q=ai:li.xiaochen"Hao, Rong-Xia"https://www.zbmath.org/authors/?q=ai:hao.rongxiaSummary: The \(k\)-ary \(n\)-cube, denoted \(Q_n^k\), has the vertex set \(V_n^k = \{ 0 , 1 , \ldots , k - 1 \}^n\), and two vertices \(( x_1 , x_2 , x_3 , \ldots , x_n )\) and \(( y_1 , y_2 , y_3 , \ldots , y_n )\) in \(V_n^k\) are adjacent if and only if there is an integer \(j\) with \(1 \leq j \leq n\) such that \(x_j \equiv y_j \pm 1 \pmod k\) and \(x_i = y_i\) for all \(i \in \{ 1 , 2 , \ldots , n \} \setminus \{ j \}\). Let \(\lambda ( \mathcal{F} )\) be the vertex Turán density of a graph \(G\) for a family of forbidden configurations \(\mathcal{F} \). Let \(F_1 = \{ ( 0 , 0 , 0 ) , ( 1 , 0 , 0 ) , ( 0 , 1 , 0 ) , ( 0 , 0 , 1 ) \}\), \(F_2 = \{ ( 0 , 0 , 0 ) , ( 1 , 1 , 0 ) , ( 1 , 0 , 1 ) , ( 0 , 1 , 1 ) \}\), \(F_3 = \{ ( 0 , 0 , 0 ) , ( 1 , 1 , 1 ) \}\), \(\mathcal{F}_{d , t , k} = \{ F \subseteq V_d^k : | F | = t \} \), and \(\mu ( d , k ) = \max \{ t : \lambda ( \mathcal{F}_{d , t , k} ) = 0 \} \). \textit{J. R. Johnson} and \textit{J. Talbot} [J. Comb. Theory, Ser. A 117, No. 4, 454--465 (2010; Zbl 1230.05176)] determined \(\lambda ( F_i )\) for \(i \in \{ 1 , 2 , 3 \}\) and the bounds of \(\mu ( d , 2 )\) in \(Q_n^2\) (where \(Q_n^2\) is the \(n\)-dimensional hypercube \(Q_n)\).
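The adjacency rule for the \(k\)-ary \(n\)-cube can be checked coordinate-wise; a minimal sketch (the helper name and sample vertices are ours, not from the paper):

```python
def adjacent(x, y, k):
    """Test adjacency in the k-ary n-cube Q_n^k: the two vertices must
    differ in exactly one coordinate, and there by +-1 modulo k."""
    diff = [i for i in range(len(x)) if x[i] != y[i]]
    if len(diff) != 1:
        return False
    j = diff[0]
    return (x[j] - y[j]) % k in (1, k - 1)

# In Q_3^3, (0,0,0) is adjacent to (2,0,0) since 0 = 2 + 1 (mod 3)
print(adjacent((0, 0, 0), (2, 0, 0), 3))  # True
print(adjacent((0, 0, 0), (1, 1, 0), 3))  # False
```

For \(k = 2\) the two residues \(+1\) and \(-1\) coincide, which recovers ordinary hypercube adjacency.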
In this paper, we derive the exact value \(\lambda ( F_i )\) for \(i \in \{ 1 , 2 , 3 \}\) in \(Q_n^3\) and determine the bounds of \(\mu ( d , k )\) in \(Q_n^k\) for any \(k \geq 3\). Furthermore, we consider the forbidden configurations \(D_i = \{ x \in V_3^k : | x | = 0 \operatorname{or} | x | = i \} \). The bounds of \(\lambda ( D_i )\) in \(Q_n^k\) are derived for any \(i \in \{ 1 , 2 , 3 \}\) and \(k \geq 2\), and \(\lambda ( G_d ) = 2 / 3\) in \(Q_n^3\) is derived, where \(G_d = \{ x \in V_d^3 : | x | = 0 , ( | x | = 1 , \operatorname{qsupp} ( x ) = i^{[ 1 ]} ) \text{ and } ( | x | = 2 , \operatorname{qsupp} ( x ) = \{ i^{[ 1 ]} , j^{[ 1 ]} \} , i \not\equiv j \pmod 2 ) \} \) (see Definition 1 for \(\operatorname{qsupp} ( x ))\).Maximum bipartite subgraphs in graphs without short cycleshttps://www.zbmath.org/1483.050852022-05-16T20:40:13.078697Z"Lin, Jing"https://www.zbmath.org/authors/?q=ai:lin.jing"Zeng, Qinghou"https://www.zbmath.org/authors/?q=ai:zeng.qinghouSummary: Given a graph \(G\), let \(f(G)\) denote the maximum number of edges in a bipartite subgraph of \(G\). Given a set \(\mathcal{H}\) of graphs and a positive integer \(m\), let \(f(m, \mathcal{H})\) denote the minimum possible cardinality of \(f(G)\), as \(G\) ranges over all graphs on \(m\) edges that contains no member of \(\mathcal{H}\) as a subgraph. Suppose that \(r \geq 10\) is an even integer and \(k \geq 2\) is an integer. In this paper, we prove that there is a constant \(c(r) > 0\) such that \(f \left(m, \{C_6, C_7, \ldots, C_{r-1}\}\right) \geq m/2 + c(r) m^{r/(r+1)}\) and there is a constant \(c(k) > 0\) such that \(f \left(m, \{C_4, C_6, \ldots, C_{2 k}, C_{2k+1}\}\right) \geq m/2 + c(k) m^{(2k+2)/(2k+3)}\), both of which improve a result of \textit{N. Alon} et al. [J. Comb. Theory, Ser. B 88, No. 
2, 329--346 (2003; Zbl 1030.05060)].Intersection graphs of rays and grounded segmentshttps://www.zbmath.org/1483.051052022-05-16T20:40:13.078697Z"Cardinal, Jean"https://www.zbmath.org/authors/?q=ai:cardinal.jean"Felsner, Stefan"https://www.zbmath.org/authors/?q=ai:felsner.stefan"Miltzow, Tillmann"https://www.zbmath.org/authors/?q=ai:miltzow.tillmann"Tompkins, Casey"https://www.zbmath.org/authors/?q=ai:tompkins.casey"Vogtenhuber, Birgit"https://www.zbmath.org/authors/?q=ai:vogtenhuber.birgitSummary: We consider several classes of intersection graphs of line segments in the plane and prove new equality and separation results between those classes. In particular, we show that:{\parindent = 0.5 cm \begin{itemize} \item[--]intersection graphs of grounded segments and intersection graphs of downward rays form the same graph class,\item [--]not every intersection graph of rays is an intersection graph of downward rays, and \item [--]not every outer segment graph is an intersection graph of rays.
\end{itemize}}The first result answers an open problem posed by Cabello and Jejčič. The third result confirms a conjecture by Cabello. We thereby completely elucidate the remaining open questions on the containment relations between these classes of segment graphs. We further characterize the complexity of the recognition problems for the classes of outer segment, grounded segment, and ray intersection graphs. We prove that these recognition problems are complete for the existential theory of the reals. This holds even if a 1-string realization is given as additional input.
For the entire collection see [Zbl 1374.68006].Incidence dimension and 2-packing number in graphshttps://www.zbmath.org/1483.051132022-05-16T20:40:13.078697Z"Božović, Dragana"https://www.zbmath.org/authors/?q=ai:bozovic.dragana"Kelenc, Aleksander"https://www.zbmath.org/authors/?q=ai:kelenc.aleksander"Peterin, Iztok"https://www.zbmath.org/authors/?q=ai:peterin.iztok"Yero, Ismael G."https://www.zbmath.org/authors/?q=ai:yero.ismael-gSummary: Let \(G = (V, E)\) be a graph. A set of vertices \(A\) is an incidence generator for \(G\) if for any two distinct edges \(e, f \in E(G)\) there exists a vertex from \(A\) which is an endpoint of either \(e\) or \(f\). The smallest cardinality of an incidence generator for \(G\) is called the incidence dimension and is denoted by \(\mathrm{dim}_I (G)\). A set of vertices \(P \subseteq V(G)\) is a 2-packing of \(G\) if the distance in \(G\) between any pair of distinct vertices from \(P\) is larger than two. The largest cardinality of a 2-packing of \(G\) is the packing number of \(G\) and is denoted by \(\rho (G)\). In this article, the incidence dimension is introduced and studied. The given results show a close relationship between \(\mathrm{dim}_I (G)\) and \(\rho (G)\). We first note that the complement of any 2-packing in a graph \(G\) is an incidence generator for \(G\), and further show that either \(\mathrm{dim}_I (G) = |V(G)| - \rho (G)\) or \(\mathrm{dim}_I (G) = |V(G)| - \rho (G) - 1\) for any graph \(G\). 
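Both notions are easy to test by brute force on small graphs. The following sketch (exponential, purely illustrative; the function names and the path example are ours) is consistent with the stated dichotomy, since for the path on four vertices \(|V(G)| - \rho(G) - 1 = 4 - 2 - 1 = 1\):

```python
from itertools import combinations

def is_incidence_generator(graph_edges, A):
    """A is an incidence generator if, for every pair of distinct
    edges e, f, some vertex of A is an endpoint of e or of f."""
    edges = list(graph_edges)
    for e, f in combinations(edges, 2):
        if not (set(e) | set(f)) & A:
            return False
    return True

def incidence_dimension(vertices, edges):
    """Brute-force dim_I: size of a smallest incidence generator."""
    for r in range(len(vertices) + 1):
        for A in combinations(vertices, r):
            if is_incidence_generator(edges, set(A)):
                return r
    return len(vertices)

# Path 1-2-3-4: {2} already meets every pair of edges, so dim_I = 1,
# while rho = 2 (vertices 1 and 4 are at distance 3).
print(incidence_dimension([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # 1
```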
In addition, we present some bounds for \(\mathrm{dim}_I (G)\) and prove that the problem of determining the incidence dimension of a graph is NP-hard.Starred Italian domination in graphshttps://www.zbmath.org/1483.051142022-05-16T20:40:13.078697Z"Cabrera Martinez, Abel"https://www.zbmath.org/authors/?q=ai:cabrera-martinez.abelSummary: An Italian dominating function on a graph \(G\) is a function \(f:V(G)\rightarrow \{0,1,2\}\) such that \(\sum_{u\in N(v)}f(u)\geq 2\) for every vertex \(v\in V_0\), where \(V_0=\{v\in V(G) : f(v)=0\}\) and \(N(v)\) represents the open neighbourhood of \(v\). A starred Italian dominating function on \(G\) is an Italian dominating function \(f\) such that \(V_0\) is not a dominating set of \(G\). The starred Italian domination number of \(G\), denoted \(\gamma_I^*(G)\), is the minimum weight \(\omega(f)=\sum_{v\in V(G)}f(v)\) among all starred Italian dominating functions \(f\) on \(G\). In this article, we initiate the study of the starred Italian domination in graphs. For instance, we give some relationships that exist between this parameter and other domination invariants in graphs. Also, we present tight bounds and characterize the extreme cases. In addition, we obtain exact formulas for some particular families of graphs. Finally, we show that the problem of computing the starred Italian domination number of a graph is NP-hard.Complexity issues of perfect secure domination in graphshttps://www.zbmath.org/1483.051152022-05-16T20:40:13.078697Z"Chakradhar, P."https://www.zbmath.org/authors/?q=ai:chakradhar.padamutham"Reddy, P. Venkata Subba"https://www.zbmath.org/authors/?q=ai:reddy.p-venkata-subbaSummary: Let \(G = (V, E)\) be a simple, undirected and connected graph. A dominating set \(S\) is called a secure dominating set if for each \(u \in V\setminus S\), there exists \(v \in S\) such that \((u, v) \in E\) and \((S \setminus \{v\}) \cup \{u\}\) is a dominating set of \(G\). 
If further the vertex \(v \in S\) is unique, then \(S\) is called a perfect secure dominating set (PSDS). The perfect secure domination number \(\gamma_{ps}(G)\) is the minimum cardinality of a perfect secure dominating set of \(G\). Given a graph \(G\) and a positive integer \(k\), the perfect secure domination (PSDOM) problem is to check whether \(G\) has a PSDS of size at most \(k\). In this paper, we prove that PSDOM problem is NP-complete for split graphs, star convex bipartite graphs, comb convex bipartite graphs, planar graphs and dually chordal graphs. We propose a linear time algorithm to solve the PSDOM problem in caterpillar trees and also show that this problem is linear time solvable for bounded tree-width graphs and threshold graphs, a subclass of split graphs. Finally, we show that the domination and perfect secure domination problems are not equivalent in computational complexity aspects.Semitotal domination subdivision numbers of graphshttps://www.zbmath.org/1483.051162022-05-16T20:40:13.078697Z"Chen, Qin"https://www.zbmath.org/authors/?q=ai:chen.qin"Tang, Yunfang"https://www.zbmath.org/authors/?q=ai:tang.yunfangA set \(D\) of vertices in an isolate-free graph \(G\) is a semitotal dominating set of \(G\) if \(D\) is a dominating set of \(G\) and every vertex in \(D\) is within distance \(2\) from another vertex of \(D\). The semitotal domination number of \(G\) is the minimum cardinality of a semitotal dominating set of \(G\) and is denoted by \(\gamma_{t2}(G)\). The semitotal domination subdivision number of a graph \(G\) is the minimum number of edges that must be subdivided in order to increase the semitotal domination number. The authors of this paper show that the decision problem for the semitotal domination subdivision number is NP-complete. Also they provide some upper bounds for this parameter.
Reviewer: Saeid Alikhani (Yazd)The parameterized complexity of the equidomination problemhttps://www.zbmath.org/1483.051242022-05-16T20:40:13.078697Z"Schaudt, Oliver"https://www.zbmath.org/authors/?q=ai:schaudt.oliver"Senger, Fabian"https://www.zbmath.org/authors/?q=ai:senger.fabianSummary: A graph \(G=(V,E)\) is called equidominating if there exists a value \(t \in \mathbb{N}\) and a weight function \(\omega: V \to \mathbb{N}\) such that the total weight of a subset \(D\subseteq V\) is equal to \(t\) if and only if \(D\) is a minimal dominating set. To decide whether or not a given graph is equidominating is referred to as the Equidomination problem.
In this paper we show that two parameterized versions of the Equidomination problem are fixed-parameter tractable: the first parameterization considers the target value \(t\) leading to the Target-\(t\) Equidomination problem. The second parameterization allows only weights up to a value \(k\), which yields the \(k\)-Equidomination problem.
In addition, we characterize the graphs whose every induced subgraph is equidominating. We give a finite forbidden induced subgraph characterization and derive a fast recognition algorithm.
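The defining equivalence for equidomination can be verified exhaustively on small graphs. A brute-force sketch follows; the triangle example and the weight function are ours, chosen only to witness the definition:

```python
from itertools import combinations

def is_dominating(vertices, adj, D):
    """Every vertex is in D or has a neighbour in D."""
    return all(v in D or adj[v] & D for v in vertices)

def is_minimal_dominating(vertices, adj, D):
    """Dominating, and removing any single vertex breaks domination."""
    return is_dominating(vertices, adj, D) and \
        not any(is_dominating(vertices, adj, D - {v}) for v in D)

def is_equidominating(vertices, adj, weight, t):
    """Check the witness (weight, t): total weight of D equals t
    if and only if D is a minimal dominating set."""
    for r in range(1, len(vertices) + 1):
        for D in map(set, combinations(vertices, r)):
            if (sum(weight[v] for v in D) == t) != \
                    is_minimal_dominating(vertices, adj, D):
                return False
    return True

# Triangle K3: the minimal dominating sets are exactly the singletons,
# so unit weights with t = 1 witness that K3 is equidominating.
V = {1, 2, 3}
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(is_equidominating(V, adj, {1: 1, 2: 1, 3: 1}, 1))  # True
```

Note that checking single-vertex removals suffices for minimality, because any superset of a dominating set is again dominating.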
For the entire collection see [Zbl 1374.68006].Hausdorff and Wasserstein metrics on graphs and other structured datahttps://www.zbmath.org/1483.051312022-05-16T20:40:13.078697Z"Patterson, Evan"https://www.zbmath.org/authors/?q=ai:patterson.evanSummary: Optimal transport is widely used in pure and applied mathematics to find probabilistic solutions to hard combinatorial matching problems. We extend the Wasserstein metric and other elements of optimal transport from the matching of sets to the matching of graphs and other structured data. This structure-preserving form of optimal transport relaxes the usual notion of homomorphism between structures. It applies to graphs -- directed and undirected, labeled and unlabeled -- and to any other structure that can be realized as a \textsf{C}-set for some finitely presented category \textsf{C}. We construct both Hausdorff-style and Wasserstein-style metrics on \textsf{C}-sets, and we show that the latter are convex relaxations of the former. Like the classical Wasserstein metric, the Wasserstein metric on \textsf{C}-sets is the value of a linear program and is therefore efficiently computable.On local super antimagic total face coloring and the application in developing a cipher block chaining keyhttps://www.zbmath.org/1483.051402022-05-16T20:40:13.078697Z"Dafik"https://www.zbmath.org/authors/?q=ai:dafik.d"Nisviasari, R."https://www.zbmath.org/authors/?q=ai:nisviasari.r"Maryati, T. K."https://www.zbmath.org/authors/?q=ai:maryati.tita-khalis"Agustin, I. H."https://www.zbmath.org/authors/?q=ai:agustin.ika-hesti"Venkatachalam, M."https://www.zbmath.org/authors/?q=ai:venkatachalam.mathiyazhaganSummary: There is a specific complexity in the use of IoT, especially in maintaining a secure data transaction. We need a good cryptosystem, since the best encryption key relies on the management of the cryptosystem. The biggest problem, then, is how to encrypt the plaintext into a ciphertext as hard as possible. 
We attempt to use the local super antimagic total face coloring of a graph in developing a cipher block chaining key. By a local super antimagic total face coloring, we mean a bijection from the set of vertices, edges, and faces to \(\{1, 2, 3, \dots, |V(G)|+|E(G)|+|F(G)|\}\) such that any two adjacent faces \(f_1\) and \(f_2\) receive different weights, \(w(f_1) \neq w(f_2)\) for \(f_1, f_2 \in F(G)\). If we use the weights as the colors of the faces, the local super antimagic total face labeling is said to be a local super antimagic total face coloring. The minimum number of colors needed to color all faces is called the local antimagic total face chromatic number, denoted by \(\gamma_{latf} (G)\). Once this resulting coloring is in hand, we can develop a cipher block chaining key.Prime labeling in the context of web graphs without centerhttps://www.zbmath.org/1483.051412022-05-16T20:40:13.078697Z"Kansagara, A. N."https://www.zbmath.org/authors/?q=ai:kansagara.ankur-n"Patel, S. K."https://www.zbmath.org/authors/?q=ai:patel.sanjay-kantibhaiSummary: A prime labeling on a graph \(G\) of order \(n\) is a bijection from the set of vertices of \(G\) into the set of first \(n\) positive integers such that any two adjacent vertices in \(G\) have relatively prime labels. In this paper we investigate prime labeling for web graphs without center and graphs resulting from the subdivision of some specific edges in them. 
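The prime-labeling condition is straightforward to verify, and small graphs can even be labeled by brute force; a minimal sketch (the 4-cycle example is ours, not from the paper):

```python
from math import gcd
from itertools import permutations

def is_prime_labeling(n, edges, label):
    """label maps vertices bijectively onto {1..n}, and the labels of
    any two adjacent vertices must be relatively prime."""
    return sorted(label.values()) == list(range(1, n + 1)) and \
        all(gcd(label[u], label[v]) == 1 for u, v in edges)

def has_prime_labeling(n, vertices, edges):
    """Exhaustive search over all n! labelings (tiny graphs only)."""
    return any(is_prime_labeling(n, edges, dict(zip(vertices, p)))
               for p in permutations(range(1, n + 1)))

# The 4-cycle 1-2-3-4-1 admits a prime labeling, e.g. the identity:
# consecutive integers, and 4 with 1, are relatively prime.
c4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(is_prime_labeling(4, c4, {1: 1, 2: 2, 3: 3, 4: 4}))  # True
print(has_prime_labeling(4, [1, 2, 3, 4], c4))             # True
```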
We also derive some interesting results on prime labeling of the union of the web graph without center with the wheel graph, the generalized Jahangir graph and the drum graph.Fractional matching preclusion number of graphshttps://www.zbmath.org/1483.051692022-05-16T20:40:13.078697Z"Zou, Jinyu"https://www.zbmath.org/authors/?q=ai:zou.jinyu"Mao, Yaping"https://www.zbmath.org/authors/?q=ai:mao.yaping"Wang, Zhao"https://www.zbmath.org/authors/?q=ai:wang.zhao"Cheng, Eddie"https://www.zbmath.org/authors/?q=ai:cheng.eddieSummary: The fractional matching preclusion number of a graph \(G\), denoted by \(\operatorname{fmp}( G )\), is the minimum number of edges whose deletion results in a graph that has no fractional perfect matchings. In this paper, we first give some sharp upper and lower bounds on the fractional matching preclusion number. Next, graphs with large and small fractional matching preclusion numbers are characterized, respectively. In the end, we investigate some extremal problems on the fractional matching preclusion number.Critical node cut parameterized by treewidth and solution size is \(W[1]\)-hardhttps://www.zbmath.org/1483.051702022-05-16T20:40:13.078697Z"Agrawal, Akanksha"https://www.zbmath.org/authors/?q=ai:agrawal.akanksha"Lokshtanov, Daniel"https://www.zbmath.org/authors/?q=ai:lokshtanov.daniel"Mouawad, Amer E."https://www.zbmath.org/authors/?q=ai:mouawad.amer-eSummary: In the Critical Node Cut problem, given an undirected graph \(G\) and two non-negative integers \(k\) and \(\mu\), the goal is to find a set \(S\) of exactly \(k\) vertices such that after deleting \(S\) we are left with at most \(\mu\) connected pairs of vertices. Back in 2015, \textit{D. Hermelin} et al. [Theor. Comput. Sci. 651, 62--75 (2016; Zbl 1359.68135)] studied the aforementioned problem under the framework of parameterized complexity. 
They considered various natural parameters, namely, the size \(k\) of the desired solution, the upper bound \(\mu\) on the number of remaining connected pairs, the lower bound \(b\) on the number of connected pairs to be removed, and the treewidth \(\mathsf{tw}(G)\) of the input graph \(G\). For all but one combination of the above parameters, they determined whether \textsc{Critical Node Cut} is fixed-parameter tractable and whether it admits a polynomial kernel. The only question they left open is whether the problem remains fixed-parameter tractable when parameterized by \(k+\mathsf{tw}(G)\). We answer this question in the negative via a new problem of independent interest, which we call SumCSP. We believe that SumCSP can be a useful starting point for showing hardness results of the same nature, i.e., where the treewidth of the graph is part of the parameter.
For the entire collection see [Zbl 1374.68006].Uniquely restricted matchings and edge coloringshttps://www.zbmath.org/1483.051712022-05-16T20:40:13.078697Z"Baste, Julien"https://www.zbmath.org/authors/?q=ai:baste.julien"Rautenbach, Dieter"https://www.zbmath.org/authors/?q=ai:rautenbach.dieter"Sau, Ignasi"https://www.zbmath.org/authors/?q=ai:sau.ignasiSummary: A matching in a graph is uniquely restricted if no other matching covers exactly the same set of vertices. This notion was defined by \textit{M. C. Golumbic} et al. [Algorithmica 31, No. 2, 139--154 (2001; Zbl 0980.68084)] and studied in a number of articles. Our contribution is twofold. We provide approximation algorithms for computing a uniquely restricted matching of maximum size in some bipartite graphs. In particular, we achieve a ratio of 5/9 for subcubic bipartite graphs, improving over a 1/2-approximation algorithm proposed by Mishra. Furthermore, we study the uniquely restricted chromatic index of a graph, defined as the minimum number of uniquely restricted matchings into which its edge set can be partitioned. We provide tight upper bounds in terms of the maximum degree and characterize all extremal graphs. Our constructive proofs yield efficient algorithms to determine the corresponding edge colorings.
For the entire collection see [Zbl 1374.68006].Defective coloring on classes of perfect graphshttps://www.zbmath.org/1483.051722022-05-16T20:40:13.078697Z"Belmonte, Rémy"https://www.zbmath.org/authors/?q=ai:belmonte.remy"Lampis, Michael"https://www.zbmath.org/authors/?q=ai:lampis.michael"Mitsou, Valia"https://www.zbmath.org/authors/?q=ai:mitsou.valiaSummary: In defective coloring we are given a graph \(G\) and two integers \({\chi_d},\varDelta^*\) and are asked if we can \({\chi_d}\)-color \(G\) so that the maximum degree induced by any color class is at most \(\varDelta^*\). We show that this natural generalization of coloring is much harder on several basic graph classes. In particular, we show that it is NP-hard on split graphs, even when one of the two parameters \({\chi_d},\varDelta^*\) is set to the smallest possible fixed value that does not trivialize the problem (\({\chi_d}=2\) or \(\varDelta^*=1\)). Together with a simple treewidth-based DP algorithm this completely determines the complexity of the problem also on chordal graphs.
We then consider the case of cographs and show that, somewhat surprisingly, defective coloring turns out to be one of the few natural problems which are NP-hard on this class. We complement this negative result by showing that defective coloring is in P for cographs if either \({\chi_d}\) or \(\varDelta^*\) is fixed; that it is in P for trivially perfect graphs; and that it admits a sub-exponential time algorithm for cographs when both \({\chi_d}\) and \(\varDelta^*\) are unbounded.
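To make the problem statement concrete, here is a minimal brute-force sketch (purely illustrative, not any algorithm from the paper) that decides whether a small graph admits a \({\chi_d}\)-coloring in which every color class induces maximum degree at most \(\varDelta^*\); the encoding and function names are my own.

```python
from itertools import product

def is_defective_coloring(n, edges, coloring, delta_star):
    """Check that every vertex has at most delta_star same-colored neighbors."""
    for v in range(n):
        same = sum(1 for (a, b) in edges
                   if v in (a, b) and coloring[a] == coloring[b] == coloring[v])
        if same > delta_star:
            return False
    return True

def has_defective_coloring(n, edges, chi_d, delta_star):
    """Brute force over all chi_d-colorings of an n-vertex graph."""
    return any(is_defective_coloring(n, edges, coloring, delta_star)
               for coloring in product(range(chi_d), repeat=n))
```

For example, \(K_4\) has a \((2,1)\)-defective coloring (two color classes, each inducing a single edge) but no \((2,0)\)-coloring, since the latter would be a proper 2-coloring of \(K_4\).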
For the entire collection see [Zbl 1374.68006].Computing maximum cliques in \(B_2\)-EPG graphshttps://www.zbmath.org/1483.051742022-05-16T20:40:13.078697Z"Bousquet, Nicolas"https://www.zbmath.org/authors/?q=ai:bousquet.nicolas"Heinrich, Marc"https://www.zbmath.org/authors/?q=ai:heinrich.marcSummary: EPG graphs, introduced by \textit{M. C. Golumbic} et al. in 2009 [Networks 54, No. 3, 130--138 (2009; Zbl 1208.05090); errata and addenda ibid. 62, No. 2, 161--163 (2013)], are edge-intersection graphs of paths on an orthogonal grid. The class \(B_k\)-EPG is the subclass of EPG graphs where the path on the grid associated to each vertex has at most \(k\) bends. \textit{D. Epstein} et al. showed in 2013 [Lect. Notes Comput. Sci. 8037, 328--340 (2013; Zbl 1390.68499)] that computing a maximum clique in \(B_1\)-EPG graphs is polynomial. As remarked in [\textit{D. Heldt} et al., Discrete Appl. Math. 167, 144--162 (2014; Zbl 1284.05180)], when the number of bends is at least 4, the class contains 2-interval graphs for which computing a maximum clique is an NP-hard problem. The complexity status of the Maximum Clique problem remains open for \(B_2\) and \(B_3\)-EPG graphs. In this paper, we show that we can compute a maximum clique in polynomial time in \(B_2\)-EPG graphs given a representation of the graph.
Moreover, we show that a simple counting argument provides a \({2(k+1)}\)-approximation for the coloring problem on \(B_k\)-EPG graphs without knowing the representation of the graph. It generalizes a result of [Epstein et al., loc. cit.] on \(B_1\)-EPG graphs (where the representation was needed).
For the entire collection see [Zbl 1374.68006].On \(H\)-topological intersection graphshttps://www.zbmath.org/1483.051752022-05-16T20:40:13.078697Z"Chaplick, Steven"https://www.zbmath.org/authors/?q=ai:chaplick.steven"Töpfer, Martin"https://www.zbmath.org/authors/?q=ai:topfer.martin"Voborník, Jan"https://www.zbmath.org/authors/?q=ai:vobornik.jan"Zeman, Peter"https://www.zbmath.org/authors/?q=ai:zeman.peterSummary: \textit{M. Biró} et al. introduced the concept of \(H\)-graphs [Discrete Math. 100, No. 1-3, 267--279 (1992; Zbl 0766.05026)], intersection graphs of connected subgraphs of a subdivision of a graph \(H\). They naturally generalize many important classes of graphs, e.g., interval graphs and circular-arc graphs. Our paper is the first study of the recognition and dominating set problems of this large collection of intersection classes of graphs.
We negatively answer the question of Biró, Hujter, and Tuza [loc. cit.] who asked whether \(H\)-graphs can be recognized in polynomial time, for a fixed graph \(H\). Namely, we show that recognizing \(H\)-graphs is \(\mathsf {NP}\)-complete if \(H\) contains the diamond graph as a minor. On the other hand, for each tree \(T\), we give a polynomial-time algorithm for recognizing \(T\)-graphs and an \(\mathcal {O}(n^4)\)-time algorithm for recognizing \(K_{1,d}\)-graphs. For the dominating set problem (parameterized by the size of \(H\)), we give \(\mathsf {FPT}\)- and \(\mathsf {XP}\)-time algorithms on \(K_{1,d}\)-graphs and \(H\)-graphs, respectively. Our dominating set algorithm for \(H\)-graphs also provides \(\mathsf {XP}\)-time algorithms for the independent set and independent dominating set problems on \(H\)-graphs.
For the entire collection see [Zbl 1374.68006].The hardness of embedding grids and wallshttps://www.zbmath.org/1483.051762022-05-16T20:40:13.078697Z"Chen, Yijia"https://www.zbmath.org/authors/?q=ai:chen.yijia"Grohe, Martin"https://www.zbmath.org/authors/?q=ai:grohe.martin"Lin, Bingkai"https://www.zbmath.org/authors/?q=ai:lin.bingkaiSummary: The dichotomy conjecture for the parameterized embedding problem states that the problem of deciding whether a given graph \(G\) from some class \(\mathcal K\) of ``pattern graphs'' can be embedded into a given graph \(H\) (that is, is isomorphic to a subgraph of \(H\)) is fixed-parameter tractable if \(\mathcal K\) is a class of graphs of bounded tree width and \(W [1]\)-complete otherwise.
Towards this conjecture, we prove that the embedding problem is \(W [1]\)-complete if \(\mathcal K\) is the class of all grids or the class of all walls.
For the entire collection see [Zbl 1374.68006].Approximately coloring graphs without long induced pathshttps://www.zbmath.org/1483.051772022-05-16T20:40:13.078697Z"Chudnovsky, Maria"https://www.zbmath.org/authors/?q=ai:chudnovsky.maria"Schaudt, Oliver"https://www.zbmath.org/authors/?q=ai:schaudt.oliver"Spirkl, Sophie"https://www.zbmath.org/authors/?q=ai:spirkl.sophie-theresa"Stein, Maya"https://www.zbmath.org/authors/?q=ai:stein.maya-jakobine"Zhong, Mingxian"https://www.zbmath.org/authors/?q=ai:zhong.mingxianSummary: It is an open problem whether the 3-coloring problem can be solved in polynomial time in the class of graphs that do not contain an induced path on \(t\) vertices, for fixed \(t\). We propose an algorithm that, given a 3-colorable graph without an induced path on \(t\) vertices, computes a coloring with \(\max \left\{ 5,2\left\lceil \frac{t-1}{2}\right\rceil -2\right\} \) many colors. If the input graph is triangle-free, we only need \(\max \left\{ 4,\left\lceil \frac{t-1}{2}\right\rceil +1\right\} \) many colors. The running time of our algorithm is \(O((3^{t-2}+t^2)m+n)\) if the input graph has \(n\) vertices and \(m\) edges.
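The color guarantees stated in this summary are simple closed-form expressions in \(t\); a small sketch (function names are mine) evaluates them:

```python
from math import ceil

def colors_general(t):
    """Colors used on 3-colorable P_t-free graphs: max{5, 2*ceil((t-1)/2) - 2}."""
    return max(5, 2 * ceil((t - 1) / 2) - 2)

def colors_triangle_free(t):
    """Triangle-free case: max{4, ceil((t-1)/2) + 1}."""
    return max(4, ceil((t - 1) / 2) + 1)
```

For instance, for \(t=12\) the general bound gives 10 colors while the triangle-free bound gives only 7.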
For the entire collection see [Zbl 1374.68006].Clique-width and well-quasi-ordering of triangle-free graph classeshttps://www.zbmath.org/1483.051782022-05-16T20:40:13.078697Z"Dabrowski, Konrad K."https://www.zbmath.org/authors/?q=ai:dabrowski.konrad-kazimierz"Lozin, Vadim V."https://www.zbmath.org/authors/?q=ai:lozin.vadim-v"Paulusma, Daniël"https://www.zbmath.org/authors/?q=ai:paulusma.danielSummary: \textit{J. Daligault} et al. [Order 27, No. 3, 301--315 (2010; Zbl 1209.05210)] asked whether every hereditary graph class that is well-quasi-ordered by the induced subgraph relation has bounded clique-width. \textit{V. V. Lozin} et al. [Lect. Notes Comput. Sci. 9224, 351--359 (2016; Zbl 1417.05155)] gave a negative answer to this question, but their counterexample is a class that can only be characterised by infinitely many forbidden induced subgraphs. This raises the issue of whether their question has a positive answer for finitely defined hereditary graph classes. Apart from two stubborn cases, this has been confirmed when at most two induced subgraphs \(H_1,H_2\) are forbidden. We confirm it for one of the two stubborn cases, namely for the case \((H_1,H_2)=(\mathrm {triangle},P_2+P_4)\) by proving that the class of \((\mathrm {triangle},P_2+P_4)\)-free graphs has bounded clique-width and is well-quasi-ordered. Our technique is based on a special decomposition of 3-partite graphs. We also use this technique to completely determine which classes of \((\mathrm {triangle},H)\)-free graphs are well-quasi-ordered.
For the entire collection see [Zbl 1374.68006].The minimum shared edges problem on grid-like graphshttps://www.zbmath.org/1483.051792022-05-16T20:40:13.078697Z"Fluschnik, Till"https://www.zbmath.org/authors/?q=ai:fluschnik.till"Hatzel, Meike"https://www.zbmath.org/authors/?q=ai:hatzel.meike"Härtlein, Steffen"https://www.zbmath.org/authors/?q=ai:hartlein.steffen"Molter, Hendrik"https://www.zbmath.org/authors/?q=ai:molter.hendrik"Seidler, Henning"https://www.zbmath.org/authors/?q=ai:seidler.henningSummary: We study the \({\mathsf {NP}}\)-hard Minimum Shared Edges (MSE) problem on graphs: decide whether it is possible to route \(p\) paths from a start vertex to a target vertex in a given graph while using at most \(k\) edges more than once. We show that MSE can be decided on bounded (i.e. finite) grids in linear time when both dimensions are either small or large compared to the number \(p\) of paths. On the contrary, we show that MSE remains \({\mathsf {NP}}\)-hard on subgraphs of bounded grids.
Finally, we study MSE from a parametrised complexity point of view. It is known that MSE is fixed-parameter tractable with respect to the number \(p\) of paths. We show that, under standard complexity-theoretical assumptions, the problem parametrised by the combined parameter \(k,p\), maximum degree, diameter, and treewidth does not admit a polynomial-size problem kernel, even when restricted to planar graphs.
For the entire collection see [Zbl 1374.68006].Enumeration and maximum number of maximal irredundant sets for chordal graphshttps://www.zbmath.org/1483.051802022-05-16T20:40:13.078697Z"Golovach, Petr A."https://www.zbmath.org/authors/?q=ai:golovach.petr-a"Kratsch, Dieter"https://www.zbmath.org/authors/?q=ai:kratsch.dieter"Liedloff, Mathieu"https://www.zbmath.org/authors/?q=ai:liedloff.mathieu"Sayadi, Mohamed Yosri"https://www.zbmath.org/authors/?q=ai:sayadi.mohamed-yosriSummary: In this paper we provide exponential-time algorithms to enumerate the maximal irredundant sets of chordal graphs and two of their subclasses. We show that the maximum number of maximal irredundant sets of a chordal graph is at most \(1.7549^n\), and these can be enumerated in time \(O(1.7549^n)\). For interval graphs, we achieve the better upper bound of \(1.6957^n\) for the number of maximal irredundant sets and we show that they can be enumerated in time \(O(1.6957^n)\). Finally, we show that forests have at most \(1.6181^n\) maximal irredundant sets that can be enumerated in time \(O(1.6181^n)\). We complement the latter result by providing a family of forests having at least \(1.5292^n\) maximal irredundant sets.
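Using the standard definition of irredundance (not restated in the summary): a set \(S\) is irredundant if every \(v \in S\) has a private neighbor, i.e. a vertex \(u \in N[v]\) with \(N[u] \cap S = \{v\}\). A brute-force enumerator for tiny graphs, purely illustrative and far from the paper's exponential-time algorithms, can be sketched as:

```python
from itertools import combinations

def maximal_irredundant_sets(n, edge_list):
    """Enumerate maximal irredundant sets of a small graph by brute force."""
    adj = {v: set() for v in range(n)}
    for a, b in edge_list:
        adj[a].add(b)
        adj[b].add(a)

    def closed(v):
        return adj[v] | {v}

    def is_irredundant(S):
        # every v in S needs a private neighbor u with N[u] ∩ S == {v}
        return all(any(closed(u) & S == {v} for u in closed(v)) for v in S)

    irr = [frozenset(c) for k in range(n + 1)
           for c in combinations(range(n), k)
           if is_irredundant(frozenset(c))]
    return [S for S in irr if not any(S < T for T in irr)]
```

On the path \(P_3\) this finds exactly two maximal irredundant sets: the middle vertex, and the pair of endpoints.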
For the entire collection see [Zbl 1374.68006].Graph isomorphisms in quasi-polynomial time [after Babai and Luks, Weisfeiler-Leman,\ldots]https://www.zbmath.org/1483.051812022-05-16T20:40:13.078697Z"Helfgott, Harald Andrés"https://www.zbmath.org/authors/?q=ai:helfgott.harald-andresSummary: Let \(\Gamma_1,\Gamma_2\) be two graphs with \(n\) vertices. Is there a bijection from the set of vertices of \(\Gamma_1\) to that of \(\Gamma_2\) sending \(\Gamma_1\) to \(\Gamma_2\)? If such bijections exist, they form a coset \(H\cdot\pi\) in the symmetric group on \(n\) elements. How can one find a representative \(\pi\) and a set of generators for \(H\)? Finding an algorithm that answers such questions efficiently (in all cases) is a challenge that has long remained open. \textit{L. Babai} [in: Proceedings of the 48th annual ACM SIGACT symposium on theory of computing, STOC '16, Cambridge, MA, USA, June 19--21, 2016. New York, NY: Association for Computing Machinery (ACM). 684--697 (2016; Zbl 1376.68058)] has recently shown how to solve these problems and related ones in quasipolynomial time, i.e., time \(O(\exp((\log n)^C))\), where \(C\) is a constant. His strategy is based in part on an algorithm due to \textit{E. M. Luks} [J. Comput. Syst. Sci. 25, 42--65 (1982; Zbl 0493.68064)], who solved the case of graphs of bounded degree.
For the entire collection see [Zbl 1416.00029].The minimum conflict-free row split problem revisitedhttps://www.zbmath.org/1483.051822022-05-16T20:40:13.078697Z"Hujdurović, Ademir"https://www.zbmath.org/authors/?q=ai:hujdurovic.ademir"Husić, Edin"https://www.zbmath.org/authors/?q=ai:husic.edin"Milanič, Martin"https://www.zbmath.org/authors/?q=ai:milanic.martin"Rizzi, Romeo"https://www.zbmath.org/authors/?q=ai:rizzi.romeo"Tomescu, Alexandru I."https://www.zbmath.org/authors/?q=ai:tomescu.alexandru-ioanSummary: Motivated by applications in cancer genomics and following the work of I. Hajirasouliha and B. J. Raphael [Lect. Notes Comput. Sci. 8701, 354--367 (2014; Zbl 1367.92082)], A. Hujdurović et al. [``Complexity and algorithms for finding a perfect phylogeny from mixed tumor samples'', IEEE/ACM Trans. Comput. Biol. Bioinf. 15, No. 1, 96--108 (2018; \url{doi:10.1109/TCBB.2016.2606620})] introduced the minimum conflict-free row split (MCRS) problem: split each row of a given binary matrix into a bitwise OR of a set of rows so that the resulting matrix corresponds to a perfect phylogeny and has the minimum number of rows among all matrices with this property. Hajirasouliha and Raphael [loc. cit.] also proposed the study of a similar problem, referred to as the Minimum Distinct Conflict-Free Row Split (MDCRS) problem, in which the task is to minimize the number of distinct rows of the resulting matrix. Hujdurović et al. [loc. cit.] proved that both problems are NP-hard, gave a related characterization of transitively orientable graphs, and proposed a polynomial-time heuristic algorithm for the MCRS problem based on coloring cocomparability graphs.
We give new formulations of the two problems, showing that the problems are equivalent to two optimization problems on branchings in a derived directed acyclic graph. Building on these formulations, we obtain new results on the two problems, including: (i) a strengthening of the heuristic by Hujdurović et al. [loc. cit.] via a new min-max result in digraphs generalizing Dilworth's theorem, (ii) APX-hardness results for both problems, (iii) two approximation algorithms for the MCRS problem, and (iv) a 2-approximation algorithm for the MDCRS problem.
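As background to the MCRS objective (and not part of the authors' branching formulations), a binary matrix corresponds to a perfect phylogeny exactly when it is conflict-free in the standard two-column three-gamete sense, which is not spelled out in the summary; a minimal check can be sketched as:

```python
def has_conflict(matrix):
    """True if some column pair exhibits all three gametes (1,0), (0,1), (1,1)."""
    if not matrix:
        return False
    m = len(matrix[0])
    for i in range(m):
        for j in range(i + 1, m):
            patterns = {(row[i], row[j]) for row in matrix}
            if {(1, 0), (0, 1), (1, 1)} <= patterns:
                return True
    return False
```

Splitting a conflicting row into a bitwise OR of several rows can remove such conflicts, which is exactly what MCRS minimizes.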
For the entire collection see [Zbl 1374.68006].Drawing planar graphs with few geometric primitiveshttps://www.zbmath.org/1483.051832022-05-16T20:40:13.078697Z"Hültenschmidt, Gregor"https://www.zbmath.org/authors/?q=ai:hultenschmidt.gregor"Kindermann, Philipp"https://www.zbmath.org/authors/?q=ai:kindermann.philipp"Meulemans, Wouter"https://www.zbmath.org/authors/?q=ai:meulemans.wouter"Schulz, André"https://www.zbmath.org/authors/?q=ai:schulz.andreSummary: We define the visual complexity of a plane graph drawing to be the number of basic geometric objects needed to represent all its edges. In particular, one object may represent multiple edges (e.g., one needs only one line segment to draw two collinear edges of the same vertex). Let \(n\) denote the number of vertices of a graph. We show that trees can be drawn with \(3n/4\) straight-line segments on a polynomial grid, and with \(n/2\) straight-line segments on a quasi-polynomial grid. Further, we present an algorithm for drawing planar 3-trees with \((8n\,-\,17)/3\) segments on an \(O(n)\,\times \,O(n^2)\) grid. This algorithm can also be used with a small modification to draw maximal outerplanar graphs with \(3n/2\) edges on an \(O(n)\,\times \,O(n^2)\) grid. We also study the problem of drawing maximal planar graphs with circular arcs and provide an algorithm to draw such graphs using only \((5n\,-\,11)/3\) arcs. This provides a significant improvement over the lower bound of \(2n\) for line segments for a nontrivial graph class.
For the entire collection see [Zbl 1374.68006].Mixed Dominating Set: a parameterized perspectivehttps://www.zbmath.org/1483.051842022-05-16T20:40:13.078697Z"Jain, Pallavi"https://www.zbmath.org/authors/?q=ai:jain.pallavi"Jayakrishnan, M."https://www.zbmath.org/authors/?q=ai:jayakrishnan.m-p"Panolan, Fahad"https://www.zbmath.org/authors/?q=ai:panolan.fahad"Sahu, Abhishek"https://www.zbmath.org/authors/?q=ai:sahu.abhishekSummary: In the Mixed Dominating Set (MDS) problem, we are given an \(n\)-vertex graph \(G\) and a positive integer \(k\), and the objective is to decide whether there exists a set \(S \subseteq V(G) \cup E(G)\) of cardinality at most \(k\) such that every element \(x\in (V(G)\cup E(G))\setminus S\) is either adjacent to or incident with an element of \(S\). We show that MDS can be solved in time \({7.465^k n^{\mathcal {O}(1)}} \) on general graphs, and in time \(2^{\mathcal {O}(\sqrt{k})} n^{\mathcal {O}(1)}\) on planar graphs. We complement this result by showing that MDS does not admit an algorithm with running time \(2^{o(k)} n^{\mathcal {O}(1)}\) unless the Exponential Time Hypothesis (ETH) fails, and that it does not admit a polynomial kernel unless \(\mathsf{coNP} \subseteq \mathsf{NP/poly}\). In addition, we provide an algorithm which, given a graph \(G\) together with a tree decomposition of width \(\mathsf{tw}\), solves MDS in time \(6^{\mathsf{tw}} n^{\mathcal {O}(1)}\). We finally show that unless the Set Cover Conjecture (SeCoCo) fails, MDS does not admit an algorithm with running time \(\mathcal{O}((2-\varepsilon)^{\mathsf{tw}(G)} n^{\mathcal{O}(1)})\) for any \(\epsilon >0\), where \(\mathsf{tw}(G)\) is the tree-width of \(G\).
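To illustrate the problem definition above (this brute force is mine; the paper's algorithms are far more sophisticated), the domination rules are: adjacent vertices dominate each other, an edge dominates its endpoints and the edges sharing an endpoint with it.

```python
from itertools import combinations

def min_mixed_dominating_set(n, edge_list):
    """Size of a minimum mixed dominating set of a small graph, by brute force."""
    edges = {frozenset(e) for e in edge_list}

    def dominated(x, S):
        for s in S:
            if isinstance(x, int) and isinstance(s, int):
                if frozenset((x, s)) in edges:      # adjacent vertices
                    return True
            elif isinstance(x, int):
                if x in s:                          # vertex incident with edge
                    return True
            elif isinstance(s, int):
                if s in x:                          # edge incident with vertex
                    return True
            elif x != s and x & s:                  # edges sharing an endpoint
                return True
        return False

    elements = list(range(n)) + list(edges)
    for k in range(len(elements) + 1):
        for S in combinations(elements, k):
            Sset = set(S)
            if all(x in Sset or dominated(x, Sset) for x in elements):
                return k
```

For the path \(P_3\) the middle vertex alone suffices, while the cycle \(C_4\) needs two elements (e.g., two opposite edges).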
For the entire collection see [Zbl 1374.68006].On strongly chordal graphs that are not leaf powershttps://www.zbmath.org/1483.051862022-05-16T20:40:13.078697Z"Lafond, Manuel"https://www.zbmath.org/authors/?q=ai:lafond.manuelSummary: A common task in phylogenetics is to find an evolutionary tree representing proximity relationships between species. This motivates the notion of leaf powers: a graph \(G=(V,E)\) is a leaf power if there exist a tree \(T\) on leafset \(V\) and a threshold \(k\) such that \(uv \in E\) if and only if the distance between \(u\) and \(v\) in \(T\) is at most \(k\). Characterizing leaf powers is a challenging open problem, along with determining the complexity of their recognition. Leaf powers are known to be strongly chordal, but few strongly chordal graphs are known to not be leaf powers, as such graphs are difficult to construct. Recently, Nevries and Rosenke asked if leaf powers could be characterized by strong chordality and a finite set of forbidden induced subgraphs.
In this paper, we provide a negative answer to this question, by exhibiting an infinite family \(\mathcal {G}\) of (minimal) strongly chordal graphs that are not leaf powers. During the process, we establish a connection between leaf powers, alternating cycles and quartet compatibility. We also show that deciding if a chordal graph is \(\mathcal {G}\)-free is \(\mathsf {NP}\)-complete.
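The forward direction of the leaf-power definition is easy to compute; a small sketch (my own encoding, using BFS distances in the tree) builds the \(k\)-leaf-power of a given tree:

```python
from collections import deque
from itertools import combinations

def leaf_power_edges(tree_adj, leaves, k):
    """Edges of the k-leaf-power: leaf pairs at tree distance at most k."""
    def dist_from(src):
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in tree_adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d

    edges = set()
    for u, v in combinations(sorted(leaves), 2):
        if dist_from(u).get(v, float("inf")) <= k:
            edges.add((u, v))
    return edges
```

On the caterpillar with internal path \(c\)-\(d\), leaves \(a,b\) at \(c\) and leaf \(e\) at \(d\), the 2-leaf-power has the single edge \(ab\), while the 3-leaf-power is a triangle on \(\{a,b,e\}\). Recognition, i.e. going from a graph back to a tree and threshold, is the hard direction discussed in the summary.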
For the entire collection see [Zbl 1374.68006].New results on weighted independent dominationhttps://www.zbmath.org/1483.051872022-05-16T20:40:13.078697Z"Lozin, Vadim"https://www.zbmath.org/authors/?q=ai:lozin.vadim-v"Malyshev, Dmitriy"https://www.zbmath.org/authors/?q=ai:malyshev.dmitry-s"Mosca, Raffaele"https://www.zbmath.org/authors/?q=ai:mosca.raffaele"Zamaraev, Viktor"https://www.zbmath.org/authors/?q=ai:zamaraev.victor-aSummary: Weighted independent domination is an \(\mathsf {NP}\)-hard graph problem, which remains computationally intractable in many restricted graph classes. Only few examples of classes are available, where the problem admits polynomial-time solutions. In the present paper, we extend the short list of such classes with two new examples.
For the entire collection see [Zbl 1374.68006].Relations between matroids and concept latticeshttps://www.zbmath.org/1483.060102022-05-16T20:40:13.078697Z"Mao, Hua"https://www.zbmath.org/authors/?q=ai:mao.hua(no abstract)On the insertion of \(n\)-powershttps://www.zbmath.org/1483.060172022-05-16T20:40:13.078697Z"Almeida, Jorge"https://www.zbmath.org/authors/?q=ai:almeida.jorge"Klíma, Ondřej"https://www.zbmath.org/authors/?q=ai:klima.ondrejSummary: In algebraic terms, the insertion of \(n\)-powers in words may be modelled at the language level by considering the pseudovariety of ordered monoids defined by the inequality \(1\le x^n\). We compare this pseudovariety with several other natural pseudovarieties of ordered monoids and of monoids associated with the Burnside pseudovariety of groups defined by the identity \(x^n=1\). In particular, we are interested in determining the pseudovariety of monoids that it generates, which can be viewed as the problem of determining the Boolean closure of the class of regular languages closed under \(n\)-power insertions. We exhibit a simple upper bound and show that it satisfies all pseudoidentities which are provable from \(1\le x^n\) in which both sides are regular elements with respect to the upper bound.Machine-learning the Sato-Tate conjecturehttps://www.zbmath.org/1483.111332022-05-16T20:40:13.078697Z"He, Yang-Hui"https://www.zbmath.org/authors/?q=ai:he.yang-hui"Lee, Kyu-Hwan"https://www.zbmath.org/authors/?q=ai:lee.kyu-hwan"Oliver, Thomas"https://www.zbmath.org/authors/?q=ai:oliver.thomasIn this paper, some of the latest machine-learning techniques are applied to the arithmetic of hyperelliptic curves. More precisely, it is observed that a Bayes classifier can be trained to distinguish the Sato-Tate groups of genus 2 curves efficiently and with high precision, given a small number of Euler factors of the \(L\)-function.
These results are consistent with the Sato-Tate conjecture for curves of low genus, and provide evidence that machine-learning can be used to classify curves according to their Sato-Tate groups. Note that the approach of using Euler factors is in accordance with the setup of the Langlands program. Further, the observations of the paper are verified using known results from the literature and the data available in the LMFDB.
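As a toy illustration of the classification setup (a minimal Gaussian naive Bayes from scratch, trained on synthetic two-class data standing in for normalized Euler-factor coefficients; this is not the paper's classifier or its LMFDB data):

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means and variances."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats = {}
        for label, rows in groups.items():
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
            self.stats[label] = (math.log(n / len(X)), means, vars_)
        return self

    def predict(self, x):
        def loglik(label):
            prior, means, vars_ = self.stats[label]
            return prior + sum(
                -0.5 * (math.log(2 * math.pi * s) + (v - m) ** 2 / s)
                for v, m, s in zip(x, means, vars_))
        return max(self.stats, key=loglik)
```

In the paper's setting the feature vectors would be derived from the first few Euler factors and the labels would be candidate Sato-Tate groups.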
Reviewer: Dimitrios Poulakis (Thessaloniki)On rational and hypergeometric solutions of linear ordinary difference equations in \(\Pi\Sigma^\ast\)-field extensionshttps://www.zbmath.org/1483.120052022-05-16T20:40:13.078697Z"Abramov, Sergei A."https://www.zbmath.org/authors/?q=ai:abramov.sergei-a"Bronstein, Manuel"https://www.zbmath.org/authors/?q=ai:bronstein.manuel-eric"Petkovšek, Marko"https://www.zbmath.org/authors/?q=ai:petkovsek.marko"Schneider, Carsten"https://www.zbmath.org/authors/?q=ai:schneider.carsten.1|schneider.carstenIn this paper the authors develop an algorithmic framework to solve linear difference equations in the context of \(\Pi\Sigma^*\)-fields. That is, the coefficients of these equations are assumed to be indefinitely nested sums and products that can be represented as elements of a \(\Pi\Sigma^*\)-field. The two main results of this paper are algorithms for computing (1) hypergeometric solutions of homogeneous linear difference equations, and (2) rational solutions of parameterized (inhomogeneous) linear difference equations. These algorithms are therefore generalizations of classical algorithms (Abramov, Petkovšek, etc.). They can also be viewed as the difference analog of Singer's algorithm, based on Risch's algorithm, for finding all Liouvillian solutions of linear differential equations whose coefficients are given in terms of Liouvillian extensions.
Reviewer: Christoph Koutschan (Linz)A package for computations with sparse resultantshttps://www.zbmath.org/1483.130052022-05-16T20:40:13.078697Z"Staglianò, Giovanni"https://www.zbmath.org/authors/?q=ai:stagliano.giovanniSummary: We introduce the \textit{Macaulay2} package \textit{SparseResultants}, which provides general tools for computing sparse resultants, sparse discriminants, and hyperdeterminants. We give some background on the theory and briefly show how the package works.Derandomization and absolute reconstruction for sums of powers of linear formshttps://www.zbmath.org/1483.130442022-05-16T20:40:13.078697Z"Koiran, Pascal"https://www.zbmath.org/authors/?q=ai:koiran.pascal"Skomra, Mateusz"https://www.zbmath.org/authors/?q=ai:skomra.mateuszIn this paper, given a homogeneous polynomial \(f (x_1,\dots, x_n)\) of degree 3, the problem to decide whether it can be written as a sum of cubes of linearly independent linear forms is studied. In other words, an algorithm is given to check if, modulo a linear change of coordinates, we can re-write \(f (A(x_1, \dots, x_n)) = f (y_1,\dots, y_n) = a_1y_1^3 + a_2y_2^3 +\dots+ a_ny_n^3\), where \(\mathbf{\underline{y}} = A\mathbf{\underline{x}}\) is a linear change of coordinates and each \(a_i\in \{0, 1\}\).
The given algorithm works over \(\mathbb{C}\) and is algebraic, i.e., it performs only arithmetic operations and equality tests on the coefficients of the input polynomial \(f\) (also inequalities if working over \(\mathbb{R}\)); moreover, the algorithm runs in polynomial time if \(f \in \mathbb{Q}[x_1, \dots, x_n]\), and it does not perform any kind of polynomial factorization.
The algorithm can run in polynomial time because it only decides whether \(f\) can be decomposed as a sum of \(r \leq n\) cubes of linearly independent linear forms; it need not produce the linear forms in question, whose coefficients may not be computable from those of \(f\) by arithmetic operations alone (examples are given).
The algorithm is obtained by viewing the input polynomial \(f\) as a symmetric tensor \(T\) of size \(n\) and order 3, considering the ``slices'' of \(T\) (symmetric \(n\times n\) matrices), and finding a simultaneous diagonalization of them; this procedure uses the factorization properties of the Hessian determinant of \(f\).
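The slice idea can be seen on a tiny two-variable example (my own illustration, not the paper's general algorithm): for \(f = (x_0+x_1)^3 + (x_1-x_0)^3\)-type inputs, the eigenvectors of the pencil of slices recover the directions of the linear forms.

```python
# Symmetric tensor T of f = (x0+x1)^3 + (x0-x1)^3 = 2*x0^3 + 6*x0*x1^2,
# i.e. f = sum_{i,j,k} T[i][j][k] * x_i * x_j * x_k.
T = [[[2, 0], [0, 2]],
     [[0, 2], [2, 0]]]

# The slices S_k = (T[i][j][k])_{i,j} are symmetric 2x2 matrices.
S0 = [[T[i][j][0] for j in range(2)] for i in range(2)]   # [[2,0],[0,2]]
S1 = [[T[i][j][1] for j in range(2)] for i in range(2)]   # [[0,2],[2,0]]

# Since S0 = 2*I is diagonal, the pencil matrix M = S0^{-1} S1 is easy:
M = [[S1[i][j] / S0[i][i] for j in range(2)] for i in range(2)]

# Eigenvalues of the 2x2 matrix M via the quadratic formula.
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = (tr * tr - 4 * det) ** 0.5
eigs = [(tr + disc) / 2, (tr - disc) / 2]

# Eigenvector for eigenvalue lam: (M - lam*I) v = 0  ->  v = (M01, lam - M00).
vecs = [(M[0][1], lam - M[0][0]) for lam in eigs]   # ∝ (1,1) and (1,-1)

# The eigenvectors simultaneously diagonalize both slices:
def bilin(S, u, v):
    return sum(S[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

off0 = bilin(S0, vecs[0], vecs[1])
off1 = bilin(S1, vecs[0], vecs[1])
```

Here the recovered directions \((1,1)\) and \((1,-1)\) match the linear forms \(x_0+x_1\) and \(x_0-x_1\), and both off-diagonal bilinear forms vanish.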
Reviewer: Alessandro Gimigliano (Bologna)A Gröbner basis algorithm for ideals over zero-dimensional valuation ringshttps://www.zbmath.org/1483.130452022-05-16T20:40:13.078697Z"Li, Dongmei"https://www.zbmath.org/authors/?q=ai:li.dongmei"Liu, Jinwang"https://www.zbmath.org/authors/?q=ai:liu.jinwangSummary: Zero-dimensional valuation rings are one kind of non-Noetherian rings. This paper investigates properties of zero-dimensional valuation rings and prove that a finitely generated ideal over such a ring has a Gröbner basis. The authors present an algorithm for computing a Gröbner basis of a finitely generated ideal over it. Furthermore, an interesting example is also provided to explain the algorithm.Computing PUR of zero-dimensional ideals of breadth at most onehttps://www.zbmath.org/1483.130462022-05-16T20:40:13.078697Z"Pan, Jian"https://www.zbmath.org/authors/?q=ai:pan.jian"Shang, Baoxin"https://www.zbmath.org/authors/?q=ai:shang.baoxin"Li, Zhe"https://www.zbmath.org/authors/?q=ai:li.zhe"Zhang, Shugong"https://www.zbmath.org/authors/?q=ai:zhang.shugongSummary: In this paper, for a zero-dimensional polynomial ideal \(I\), the authors prove that \(k[x_1, x_2,\cdots,x_n]/I\) is cyclic if and only if the breadth of \(I\) is 0 or 1. Furthermore, the authors present a new algorithm to compute polynomial univariate representation (PUR) of such an ideal.An improvement for GVWhttps://www.zbmath.org/1483.130472022-05-16T20:40:13.078697Z"Zheng, Licui"https://www.zbmath.org/authors/?q=ai:zheng.licui"Li, Dongmei"https://www.zbmath.org/authors/?q=ai:li.dongmei"Liu, Jinwang"https://www.zbmath.org/authors/?q=ai:liu.jinwangSummary: \textit{S. Gao} et al. [Math. Comput. 85, No. 297, 449--465 (2016; Zbl 1331.13018)] gave a simple algorithm to compute Gröbner bases named GVW. It can be used to compute Gröbner bases for both ideals and syzygies at the same time, and the latter plays an important role in free resolutions in homological algebra. 
In the GVW algorithm, one first computes all the J-pairs and then uses the GVW criterion (i.e., the criteria used in GVW) to determine which pairs are useless and which should be top-reduced. In this paper, based on a study of the relations between J-pairs, the authors propose the concept of a factor. This concept allows them to filter out useless J-pairs in a rather convenient way. Moreover, by using this concept, they may easily determine which J-pairs need not be computed at all. Besides, the Gröbner basis obtained is simpler than the one produced by GVW.An improvement of the rational representation for high-dimensional systemshttps://www.zbmath.org/1483.130482022-05-16T20:40:13.078697Z"Xiao, Fanghui"https://www.zbmath.org/authors/?q=ai:xiao.fanghui"Lu, Dong"https://www.zbmath.org/authors/?q=ai:lu.dong"Ma, Xiaodong"https://www.zbmath.org/authors/?q=ai:ma.xiaodong"Wang, Dingkang"https://www.zbmath.org/authors/?q=ai:wang.dingkangSummary: Based on the rational univariate representation of zero-dimensional polynomial systems, Tan and Zhang proposed the rational representation theory for solving a high-dimensional polynomial system, which uses so-called rational representation sets to describe all the zeros of a high-dimensional polynomial system. This paper is devoted to giving an improvement of the rational representation. The idea of this improvement comes from a minimal Dickson basis used for computing a comprehensive Gröbner system of a parametric polynomial system to reduce the number of branches. The authors replace the normal Gröbner basis \(G\) satisfying certain conditions in the original algorithm (Tan-Zhang's algorithm) with a minimal Dickson basis \(G_m\) of a Gröbner basis for the ideal, where \(G_m\) is smaller in size than \(G\). Based on this, the authors give an improved algorithm. Moreover, the proposed algorithm has been implemented on the computer algebra system Maple.
Experimental data and a performance comparison with the original algorithm show that the improved algorithm generates fewer branches and that the improvement is worthwhile.A note on the boundary of the Birkhoff-James \(\epsilon\)-orthogonality setshttps://www.zbmath.org/1483.150122022-05-16T20:40:13.078697Z"Katsouleas, Georgios"https://www.zbmath.org/authors/?q=ai:katsouleas.georgios"Panagakou, Vasiliki"https://www.zbmath.org/authors/?q=ai:panagakou.vasiliki"Psarrakos, Panayiotis"https://www.zbmath.org/authors/?q=ai:psarrakos.panayiotis-jThe numerical range of a matrix polynomial \(P\) is a generalization of the numerical range (field of values) of a matrix \(A\). The classical case is obtained for \(P(z) = zI - A\). Recently, the notions of Birkhoff-James \(\varepsilon\)-orthogonality sets of rectangular matrices and elements of a complex normed linear space have been introduced as natural generalizations of the standard numerical range. In this paper, a characterization of the corners of the Birkhoff-James \(\varepsilon\)-orthogonality sets of vectors and vector-valued polynomials is obtained. A randomized algorithm for approximating their boundaries is also proposed.
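For orientation, boundary points of the classical numerical range \(F(A)\) (the case \(P(z)=zI-A\) mentioned above) can be generated by the standard support-function recipe: for each rotation angle, a top eigenvector of the Hermitian part of the rotated matrix yields a boundary point. A minimal numpy sketch of this classical procedure (it is not the randomized algorithm proposed in the paper):

```python
import numpy as np

def numerical_range_boundary(A, n_angles=360):
    """Approximate boundary points of the numerical range F(A).

    For each angle theta, the point v* A v, where v is a unit eigenvector
    for the largest eigenvalue of the Hermitian part of exp(i*theta)*A,
    lies on the boundary of F(A).
    """
    points = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * A
        H = (R + R.conj().T) / 2            # Hermitian part of the rotated matrix
        _, V = np.linalg.eigh(H)            # eigenvalues in ascending order
        v = V[:, -1]                        # eigenvector of the largest eigenvalue
        points.append(v.conj() @ A @ v)     # a boundary point of F(A)
    return np.array(points)

# Example: for the Hermitian matrix diag(0, 1), F(A) is the real segment [0, 1].
A = np.diag([0.0, 1.0]).astype(complex)
pts = numerical_range_boundary(A)
```

For a Hermitian matrix the computed points are real and sweep out the segment between the extreme eigenvalues, which makes the example easy to sanity-check.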
Reviewer: Cătălin Badea (Villeneuve d'Ascq)Computing higher Leray-Serre spectral sequences of towers of fibrationshttps://www.zbmath.org/1483.180272022-05-16T20:40:13.078697Z"Guidolin, Andrea"https://www.zbmath.org/authors/?q=ai:guidolin.andrea"Romero, Ana"https://www.zbmath.org/authors/?q=ai:romero.ana-armasSpectral systems and higher spectral sequences are generalizations of spectral sequences due to [\textit{B.\ Matschke}, ``Successive spectral sequences'', Preprint, \url{arXiv:1308.3187}; ``Higher spectral sequences'', Preprint, \url{arXiv:2107.02130}] that arise from filtrations by arbitrary posets or multiple but compatible filtrations of a single chain complex. As this extra generality comes at the cost of extra bookkeeping, algorithms for computations with higher spectral sequences are a worthwhile target.
In the article under review, the authors continue their investigation of algorithms for higher spectral sequences begun in [\textit{A. Guidolin} and \textit{A. Romero}, in: Proceedings of the 43rd international symposium on symbolic and algebraic computation, ISSAC 2018, New York, NY, USA, July 16--19, 2018. New York, NY: Association for Computing Machinery (ACM). 183--190 (2018; Zbl 1467.55011)] and devise algorithms for computing with Matschke's higher Leray-Serre spectral sequence associated with a tower of fibrations.
From the abstract: ``In this work, we present algorithms to compute higher Leray-Serre spectral sequences leveraging the effective homology technique, which allows to perform computations involving chain complexes of infinite type associated with interesting objects in algebraic topology. In order to develop the programs, implemented as a new module for the Computer Algebra system Kenzo, we translated the original construction of the higher Leray-Serre spectral sequence in a simplicial framework and studied some of its fundamental properties.''
Reviewer: Tobias Columbus (Paderborn)The lamplighter group of rank two generated by a bireversible automatonhttps://www.zbmath.org/1483.200492022-05-16T20:40:13.078697Z"Ahmed, Elsayed"https://www.zbmath.org/authors/?q=ai:ahmed.elsayed-a"Savchuk, Dmytro"https://www.zbmath.org/authors/?q=ai:savchuk.dmytro-mConsider a deterministic, complete, finite-state automaton \(\mathcal A\) over an alphabet \(\Sigma\), regarded as a letter-to-letter transducer. If one fixes an initial state, one obtains a length-preserving transformation of the set \(\Sigma^*\) of words over the alphabet \(\Sigma\), which can be viewed as a \(|\Sigma|\)-regular rooted tree \(\mathcal T_\Sigma\).
Varying the initial state, one gets finitely many such transformations. When they are all bijective, there is a canonically defined automaton \(\mathcal A^{-1}\) whose associated transformations are the inverses of the transformations associated to \(\mathcal A\). The group of transformations of \(\Sigma^*\) they generate is called an automaton group. Grigorchuk's group of intermediate growth is a prominent example of such a group.
The authors construct an explicit 4-state 2-letter automaton \(\mathcal A\) such that the four associated transformations of the regular rooted binary tree generate a lamplighter group \((\mathbb{Z}_2^2)\wr \mathbb{Z}\) of rank two. Moreover, the automaton is \emph{bi-reversible}: this means that the dual automata (obtained by reversing the roles of the set of states and of the alphabet) of \(\mathcal A\) and \(\mathcal A^{-1}\) are both invertible (i.e.\ induce bijections of the associated rooted trees).
Additionally, using the natural identification of the boundary of \(\mathcal T_\Sigma\) with the ring of formal series \(\mathbb Z_2[[t]]\), the authors show that the automaton group is actually contained in its group of affine transformations \(\mathbb Z_2[[t]]^{\times}\ltimes \mathbb Z_2[[t]]\).
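As a toy illustration of how such an automaton acts on binary words, consider the classical two-state ``adding machine'' automaton, which generates \(\mathbb Z\); this is a standard textbook example, not the authors' 4-state bireversible automaton:

```python
# Mealy automaton: DELTA[state][letter] = (output_letter, next_state).
# The "adding machine": state 'a' adds 1 with carry, state 'e' is the identity.
DELTA = {
    'a': {'0': ('1', 'e'), '1': ('0', 'a')},
    'e': {'0': ('0', 'e'), '1': ('1', 'e')},
}

def act(state, word):
    """Length-preserving action of the automaton on a binary word."""
    out = []
    for letter in word:
        output, state = DELTA[state][letter]
        out.append(output)
    return ''.join(out)

# On least-significant-bit-first binary words, state 'a' acts as +1 mod 2^n:
print(act('a', '1100'))  # '1100' encodes 3; adding 1 gives 4, i.e. '0010'
```

Each state induces a length-preserving bijection of \(\{0,1\}^*\), i.e. an automorphism of the binary rooted tree; the group these bijections generate is the automaton group.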
Reviewer: Vincent Guirardel (Rennes)A neural network closure for the Euler-Poisson system based on kinetic simulationshttps://www.zbmath.org/1483.351582022-05-16T20:40:13.078697Z"Bois, Léo"https://www.zbmath.org/authors/?q=ai:bois.leo"Franck, Emmanuel"https://www.zbmath.org/authors/?q=ai:franck.emmanuel"Navoret, Laurent"https://www.zbmath.org/authors/?q=ai:navoret.laurent"Vigon, Vincent"https://www.zbmath.org/authors/?q=ai:vigon.vincentSummary: This work deals with the modeling of plasmas, which are ionized gases. Thanks to machine learning, we construct a closure for the one-dimensional Euler-Poisson system valid for a wide range of collisional regimes. This closure, based on a fully convolutional neural network called V-net, takes as input the whole spatial density, mean velocity and temperature and predicts as output the whole heat flux. It is learned from data coming from kinetic simulations of the Vlasov-Poisson equations. Data generation and preprocessing are designed to ensure an almost uniform accuracy over the chosen range of Knudsen numbers (which parametrize collisional regimes). Finally, several numerical tests are carried out to assess the validity and flexibility of the whole pipeline.Matrix solitons solutions of the modified Korteweg-de Vries equationhttps://www.zbmath.org/1483.351822022-05-16T20:40:13.078697Z"Carillo, Sandra"https://www.zbmath.org/authors/?q=ai:carillo.sandra"Lo Schiavo, Mauro"https://www.zbmath.org/authors/?q=ai:lo-schiavo.mauro"Schiebold, Cornelia"https://www.zbmath.org/authors/?q=ai:schiebold.corneliaSummary: Nonlinear non-abelian Korteweg-de Vries (KdV) and modified Korteweg-de Vries (mKdV) equations and their links via Bäcklund transformations are considered. The focus is on the construction of soliton solutions admitted by the matrix modified Korteweg-de Vries equation. Matrix equations can be viewed as a specialisation of operator equations in the finite-dimensional case, when operators admit a matrix representation. 
Bäcklund transformations allow one to reveal structural properties [the first and third authors, J. Math. Phys. 50, No. 7, 073510, 14 p. (2009; Zbl 1256.37036)] enjoyed by non-commutative KdV-type equations, such as the existence of a recursion operator. Operator methods combined with Bäcklund transformations allow one to construct explicit solution formulae [the first and third authors, ibid. 52, No. 5, 053507, 21 p. (2011; Zbl 1317.35220)]. The latter are adapted to obtain solutions admitted by the \(2 \times 2\) and \(3 \times 3\) matrix mKdV equation. Some of these matrix solutions are visualised to show the solitonic behaviour they exhibit. A further key tool used to obtain the presented results is an ad hoc construction of computer algebra routines to implement non-commutative computations.
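In the scalar (abelian) special case the solitonic behaviour can be checked directly: \(v(x,t)=k\operatorname{sech}(k(x-k^2t))\) solves the mKdV equation \(v_t+6v^2v_x+v_{xxx}=0\). A finite-difference sanity check of this classical fact (illustrative only; the paper's matrix solutions require the operator-theoretic machinery):

```python
import numpy as np

# One-soliton of the scalar mKdV equation v_t + 6 v^2 v_x + v_xxx = 0:
# v(x, t) = k * sech(k * (x - k**2 * t)).
k, t0 = 1.2, 0.3
dx, dt = 1e-2, 1e-3
x = np.arange(-10.0, 10.0, dx)

def soliton(t):
    return k / np.cosh(k * (x - k**2 * t))

v = soliton(t0)
v_t = (soliton(t0 + dt) - soliton(t0 - dt)) / (2 * dt)   # central difference in t

# Central differences for v_x and v_xxx on the interior grid points.
vm2, vm1, vc, vp1, vp2 = v[:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]
v_x = (vp1 - vm1) / (2 * dx)
v_xxx = (vp2 - 2 * vp1 + 2 * vm1 - vm2) / (2 * dx**3)

# The residual of the PDE should vanish up to discretization error.
residual = v_t[2:-2] + 6 * vc**2 * v_x + v_xxx
```

The residual is of order \(O(dx^2 + dt^2)\), so it shrinks as the grid is refined; the soliton travels with speed \(k^2\) while keeping its sech profile.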
For the entire collection see [Zbl 1464.70003].Data-driven vector soliton solutions of coupled nonlinear Schrödinger equation using a deep learning algorithmhttps://www.zbmath.org/1483.352172022-05-16T20:40:13.078697Z"Mo, Yifan"https://www.zbmath.org/authors/?q=ai:mo.yifan"Ling, Liming"https://www.zbmath.org/authors/?q=ai:ling.liming"Zeng, Delu"https://www.zbmath.org/authors/?q=ai:zeng.deluThe aim of the paper is to investigate data-driven vector solitons for the coupled nonlinear Schrödinger equation (CNLSE) by means of an improved physics-informed neural network (PINN) method. One considers an initial-boundary value problem with Dirichlet boundary conditions for the CNLSE:
\[
iq_t + \frac{1}{2}q_{xx} + \|q\|_2^2 q = 0,\text{ for }x\in [-x_L,x_L],\ t\in[t_0,t_T],
\]
\[
q(x,t_0)=q^{t_0}(x),\text{ for }x\in [- x_L,x_L],
\]
\[
q(x_L,t)=q(-x_L,t)=q^b(t),\text{ for }t\in[t_0,t_T].
\]
Here, \(q(x,t)=(q_1(x,t),q_2(x,t))\) is a complex vector field. The original PINN method for solving the CNLSE is briefly reviewed, and, in order to improve its convergence rate and approximation ability, a new pre-fixed multi-stage training algorithm is proposed. The second section of the article describes the construction of the improved PINN via pre-fixed multi-stage training in detail and presents the flow diagram together with the pre-fixed multi-stage training algorithm; a few remarks on the efficiency and accuracy of the method close the section. The third section investigates the performance of the improved method, considering the case of a nondegenerate vector soliton solution and the collision of vector soliton solutions. The reference list contains 65 titles.
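The structure of the CNLSE can be illustrated on an exact vector plane wave: \(q_j(x,t)=A_je^{i(kx-\omega t)}\) solves the system above precisely when \(\omega=k^2/2-(|A_1|^2+|A_2|^2)\). A finite-difference check of this dispersion relation (a sketch for orientation; it is not the authors' PINN scheme):

```python
import numpy as np

A = np.array([0.3, 0.4])          # constant amplitudes A_1, A_2 (illustrative)
k = 1.0
P = np.sum(np.abs(A)**2)          # |A_1|^2 + |A_2|^2 = 0.25
omega = k**2 / 2 - P              # dispersion relation of the plane wave

dx, dt, t0 = 1e-3, 1e-3, 0.1
x = np.arange(-1, 1, dx)

def q(t):
    # Shape (2, len(x)): the two components of the vector field.
    return A[:, None] * np.exp(1j * (k * x[None, :] - omega * t))

qc = q(t0)
q_t = (q(t0 + dt) - q(t0 - dt)) / (2 * dt)                       # central in t
q_xx = (qc[:, 2:] - 2 * qc[:, 1:-1] + qc[:, :-2]) / dx**2        # central in x
norm2 = np.sum(np.abs(qc[:, 1:-1])**2, axis=0)                   # ||q||_2^2

# Residual of i q_t + (1/2) q_xx + ||q||^2 q on the interior grid.
residual = 1j * q_t[:, 1:-1] + 0.5 * q_xx + norm2 * qc[:, 1:-1]
```

A PINN would minimise exactly this kind of residual (evaluated by automatic differentiation rather than finite differences) together with the initial- and boundary-condition mismatches.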
Reviewer: Claudia Simionescu-Badea (Wien)Chaos and ergodicity are decidable for linear cellular automata over \((\mathbb{Z}/m\mathbb{Z})^n\)https://www.zbmath.org/1483.370222022-05-16T20:40:13.078697Z"Dennunzio, Alberto"https://www.zbmath.org/authors/?q=ai:dennunzio.alberto"Formenti, Enrico"https://www.zbmath.org/authors/?q=ai:formenti.enrico"Grinberg, Darij"https://www.zbmath.org/authors/?q=ai:grinberg.darij"Margara, Luciano"https://www.zbmath.org/authors/?q=ai:margara.lucianoSummary: We prove that important properties describing complex behaviours, such as ergodicity, chaos, topological transitivity, and topological mixing, are decidable for one-dimensional linear cellular automata (LCA) over \((\mathbb{Z}/m\mathbb{Z})^n\) (Theorem 6 and Corollary 7), a large and important class of cellular automata (CA) which are able to exhibit the complex behaviours of general CA and are used in applications. In particular, we provide a decidable characterization of ergodicity, which is known to be equivalent to all the above-mentioned properties, in terms of the characteristic polynomial of the matrix associated with the LCA. We stress that the setting of LCA over \((\mathbb{Z}/m\mathbb{Z})^n\) with \(n > 1\) is more expressive, gives rise to much more complex dynamics, and is more difficult to deal with than the already investigated case \(n = 1\). The proof techniques from [\textit{M. Itô} et al., J. Comput. Syst. Sci. 27, 125--140 (1983; Zbl 0566.68047); \textit{G. Manzini} and \textit{L. Margara}, Theor. Comput. Sci. 221, No. 1--2, 157--177 (1999; Zbl 0930.68090)] used when \(n = 1\) for obtaining decidable characterizations of dynamical and ergodic properties can no longer be exploited when \(n > 1\) for achieving the same goal. Indeed, in order to get the decision algorithm (Algorithm 1) we need to prove a non-trivial result of abstract algebra (Theorem 5) which is also of interest in its own right.
We also illustrate the impact of our results in real-world applications concerning the important and growing domain of cryptosystems, which are often based on one-dimensional LCA over \((\mathbb{Z}/m\mathbb{Z})^n\) with \(n > 1\). As a matter of fact, since cryptosystems have to satisfy the so-called confusion and diffusion properties (ensured by ergodicity and chaos, respectively, of the involved LCA), Algorithm 1 turns out to be an important tool for building chaotic/ergodic one-dimensional linear CA over \((\mathbb{Z}/m\mathbb{Z})^n\) and, hence, for improving the existing methods based on them.A bridge between invariant dynamical structures and uncertainty quantificationhttps://www.zbmath.org/1483.371142022-05-16T20:40:13.078697Z"García-Sánchez, G."https://www.zbmath.org/authors/?q=ai:garcia-sanchez.g"Mancho, A. M."https://www.zbmath.org/authors/?q=ai:mancho.ana-maria|mancho.ana-mariia"Wiggins, S."https://www.zbmath.org/authors/?q=ai:wiggins.stephenSummary: We develop a new quantifier for forward time uncertainty for trajectories that are solutions of models generated from data sets. Our uncertainty quantifier is defined on the phase space in which the trajectories evolve and we show that it has a rich structure that is directly related to phase space structures from dynamical systems theory, such as hyperbolic trajectories and their stable and unstable manifolds. We apply our approach to an ocean data set, as well as standard benchmark models from deterministic dynamical systems theory. A significant application of our results is that they allow a quantitative comparison of the transport performance described from different ocean data sets. 
This is particularly interesting nowadays, when a wide variety of data sources are available, since our methodology provides avenues for assessing the effective use of these data sets in a variety of situations.Robust wavelet-based estimation for varying coefficient dynamic models under long-dependent structureshttps://www.zbmath.org/1483.420252022-05-16T20:40:13.078697Z"Zhou, Xingcai"https://www.zbmath.org/authors/?q=ai:zhou.xingcai"Lv, Shaogao"https://www.zbmath.org/authors/?q=ai:lv.shaogaoIn data analysis, varying-coefficient models are a tool to explore the dynamic features which may exist in the data set.
In this paper the authors consider the following varying-coefficient model \[Y_i=\mathfrak{X}_i^{\top}\,{\beta}(T_i)+\epsilon_i,\quad i=1,\dots,p,\] where \(\beta(\cdot)=(\beta_1(\cdot),\dots,\beta_p(\cdot))^{\top}\) is a vector of unknown functions, \(\mathfrak{X}_i=(X_{i1},\dots,X_{ip})^{\top}\) is a vector of features, \(Y_i\) is the response, \(T_i\) is a random design point and \(\epsilon_i\) is the noise term.
Under the assumption that the design features and the noises are short-range dependent and long-range dependent, respectively, they consider M-type estimation based on wavelet techniques, which does not require much smoothness of the true function \(\beta(\cdot)\).
They establish consistency and asymptotic distribution of the proposed estimator under some mild conditions.
Reviewer: Sandra Saliani (Potenza)Introduction to topological data analysis and applicationshttps://www.zbmath.org/1483.550042022-05-16T20:40:13.078697Z"Ibáñez Marcelo, Esther"https://www.zbmath.org/authors/?q=ai:ibanez-marcelo.estherSummary: Topological Data Analysis (TDA) has attracted increasing attention in the last ten years. Here we present an introduction based on basic concepts and definitions that allow us to describe the main tools in this recent field, where greater importance is given to shape rather than to metrics. We will see what persistent homology consists of and how it can be applied in a wide range of scientific fields, such as neuroscience, image recognition and genetics.Testing that a local optimum of the likelihood is globally optimum using reparameterized embeddings. Applications to wavefront sensinghttps://www.zbmath.org/1483.620572022-05-16T20:40:13.078697Z"LeBlanc, Joel W."https://www.zbmath.org/authors/?q=ai:leblanc.joel-w"Thelen, Brian J."https://www.zbmath.org/authors/?q=ai:thelen.brian-j"Hero, Alfred O."https://www.zbmath.org/authors/?q=ai:hero.alfred-o-iiiSummary: Many mathematical imaging problems are posed as non-convex optimization problems. When numerically tractable global optimization procedures are not available, one is often interested in testing ex post facto whether or not a locally convergent algorithm has found the globally optimal solution. When the problem is formulated in terms of maximizing the likelihood function under a statistical model for the measurements, one can construct a statistical test that a local maximum is in fact the global maximum. A one-sided test is proposed for the case that the statistical model is a member of the generalized location family of probability distributions, a condition often satisfied in imaging and other inverse problems. 
We propose a general method for improving the accuracy of the test by reparameterizing the likelihood function to embed its domain into a higher-dimensional parameter space. We show that the proposed global maximum testing method results in improved accuracy and reduced computation for a physically motivated joint-inverse problem arising in camera-blur estimation.Transport analysis of infinitely deep neural networkhttps://www.zbmath.org/1483.620722022-05-16T20:40:13.078697Z"Sonoda, Sho"https://www.zbmath.org/authors/?q=ai:sonoda.sho"Murata, Noboru"https://www.zbmath.org/authors/?q=ai:murata.noboruSummary: We investigated the feature map inside deep neural networks (DNNs) by tracking the transport map. We are interested in the role of depth -- why do DNNs perform better than shallow models? -- and the interpretation of DNNs -- what do intermediate layers do? Despite the rapid development in their application, DNNs remain analytically unexplained because the hidden layers are nested and the parameters are not faithful. Inspired by the integral representation of shallow NNs, which is the continuum limit of the width, or the hidden unit number, we developed the flow representation and transport analysis of DNNs. The flow representation is the continuum limit of the depth, or the hidden layer number, and it is specified by an ordinary differential equation (ODE) with a vector field. We interpret an ordinary DNN as a transport map or an Euler broken line approximation of the flow. Technically speaking, a dynamical system is a natural model for the nested feature maps. In addition, it opens a new way
to the
coordinate-free treatment of DNNs by avoiding the redundant parametrization of DNNs. Following Wasserstein geometry, we analyze a flow in three aspects: dynamical system, continuity equation, and Wasserstein gradient flow. A key finding is that we specified a series of transport maps of the denoising autoencoder (DAE), which is a cornerstone for the development of deep learning. Starting from the shallow DAE, this paper develops three topics: the transport map of the deep DAE, the equivalence between the stacked DAE and the composition of DAEs, and the development of the double continuum limit or the integral representation of the flow representation. As partial answers to the research questions, we found that deeper DAEs converge faster and the extracted features are better; in addition, a deep Gaussian DAE transports mass to decrease the Shannon entropy of the data distribution. We expect that further investigations on these questions lead to the development of interpretable and principled alternatives
to DNNs.Non-convex projected gradient descent for generalized low-rank tensor regressionhttps://www.zbmath.org/1483.620912022-05-16T20:40:13.078697Z"Chen, Han"https://www.zbmath.org/authors/?q=ai:chen.han"Raskutti, Garvesh"https://www.zbmath.org/authors/?q=ai:raskutti.garvesh"Yuan, Ming"https://www.zbmath.org/authors/?q=ai:yuan.mingSummary: In this paper, we consider the problem of learning high-dimensional tensor regression problems with low-rank structure. One of the core challenges associated with learning high-dimensional models is computation since the underlying optimization problems are often non-convex. While convex relaxations could lead to polynomial-time algorithms they are often slow in practice. On the other hand, limited theoretical guarantees exist for non-convex methods. In this paper we provide a general framework that provides theoretical guarantees for learning high-dimensional tensor regression models under different low-rank structural assumptions using the projected gradient descent algorithm applied to a potentially non-convex constraint set \(\Theta\) in terms of its \textit{localized Gaussian width} (due to Gaussian design). We juxtapose our theoretical results for non-convex projected gradient descent algorithms with previous results on regularized convex approaches. The two main differences between the
convex and non-convex
approaches are: (i) from a computational perspective, whether the non-convex projection operator is computable and whether the projection has desirable contraction properties, and (ii) from a statistical error bound perspective, the non-convex approach has a superior rate for a number of examples. We provide three concrete examples of low-dimensional structure which address these issues and explain the pros and cons for the non-convex and convex approaches. We supplement our theoretical results with simulations which show that, under several common settings of generalized low rank tensor regression, the projected gradient descent approach is superior both in terms of statistical error and run-time, provided the step-sizes of the projected descent algorithm are suitably chosen.Accelerated alternating projections for robust principal component analysishttps://www.zbmath.org/1483.620982022-05-16T20:40:13.078697Z"Cai, Hanqin"https://www.zbmath.org/authors/?q=ai:cai.hanqin"Cai, Jian-Feng"https://www.zbmath.org/authors/?q=ai:cai.jian-feng"Wei, Ke"https://www.zbmath.org/authors/?q=ai:wei.keSummary: We study robust PCA for the fully observed setting, which is about separating a low rank matrix \(\boldsymbol{L}\) and a sparse matrix \(\boldsymbol{S}\) from their sum \(\boldsymbol{D}=\boldsymbol{L}+\boldsymbol{S}\). In this paper, a new algorithm, dubbed accelerated alternating projections, is introduced for robust PCA which significantly improves the computational efficiency of the existing alternating projections proposed in [\textit{P. Netrapalli} et al., ``Non-convex robust PCA'', in: Proceedings of the annual conference on neural information processing systems 2014, December 8--13, 2014, Montreal, Quebec, Canada. Red Hook, NY: Curran. 1107--1115 (2014), \url{https://proceedings.neurips.cc/paper/2014/hash/443cb001c138b2561a0d90720d6ce111-Abstract.html}] when updating the low rank factor. 
The acceleration is achieved by first projecting a matrix onto some low dimensional subspace before obtaining a new estimate of the low rank matrix via truncated SVD. An exact recovery guarantee is established, which shows linear convergence of the proposed algorithm. Empirical performance evaluations establish the advantage of our algorithm over other state-of-the-art algorithms for robust PCA.Probabilistic kernel support vector machineshttps://www.zbmath.org/1483.621022022-05-16T20:40:13.078697Z"Chen, Yongxin"https://www.zbmath.org/authors/?q=ai:chen.yongxin"Georgiou, Tryphon T."https://www.zbmath.org/authors/?q=ai:georgiou.tryphon-t"Tannenbaum, Allen R."https://www.zbmath.org/authors/?q=ai:tannenbaum.allen-rSummary: We propose a probabilistic enhancement of standard kernel support vector machines for binary classification, in order to address the case when, along with given data sets, a description of uncertainty (e.g., error bounds) may be available on each datum. In the present paper, we specifically consider Gaussian distributions to model uncertainty. Thereby, our data consist of pairs \((x_i,\Sigma_i)\), \(i\in\{1,\ldots,N\}\), along with an indicator \(y_i\in\{-1,1\}\) to declare membership in one of two categories for each pair. These pairs may be viewed as representing the mean and covariance, respectively, of random vectors \(\xi_i\) taking values in a suitable linear space (typically \(\mathbb{R}^n\)). Thus, our setting may also be viewed as a modification of support vector machines to classify distributions, albeit, at present, only Gaussian ones. We outline the formalism that allows computing suitable classifiers via a natural modification of the standard ``kernel trick''. 
The main contribution of this work is to point out a suitable kernel function for applying support vector techniques to the setting of uncertain data for which a detailed uncertainty description is also available (herein, ``Gaussian points'').Entropy-based estimation in classification problemshttps://www.zbmath.org/1483.621032022-05-16T20:40:13.078697Z"Dubnov, Yuriĭ A."https://www.zbmath.org/authors/?q=ai:dubnov.yury-aSummary: The problem of binary classification is considered, and an algorithm for its solution, based on entropy-based estimation of the decision-rule parameters, is proposed. A detailed description of the entropy-based estimation method and the classification algorithm is given, the advantages and disadvantages of this approach are described, and the results of numerical experiments and of comparisons with the traditional support vector machine, in terms of classification accuracy and degree of dependence on the training sample size, are presented.On coarse graining of information and its application to pattern recognitionhttps://www.zbmath.org/1483.621052022-05-16T20:40:13.078697Z"Ghaderi, Ali"https://www.zbmath.org/authors/?q=ai:ghaderi.aliSummary: We propose a method based on finite mixture models for classifying a set of observations into a number of different categories. In order to demonstrate the method, we show how the component densities for the mixture model can be derived by using the maximum entropy method in conjunction with conservation of Pythagorean means. Several examples of distributions belonging to the Pythagorean family are derived. A discussion on estimation of model parameters and the number of categories is also given.
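The finite-mixture classification idea underlying the last summary can be sketched with a plain EM fit of a two-component Gaussian mixture (a generic illustration; the paper's construction via maximum entropy and Pythagorean means is different):

```python
import numpy as np

rng = np.random.default_rng(0)
# Observations drawn from two well-separated "categories".
data = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

# Initial guesses for the weights, means, and variances of the two components.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def normal_pdf(x, m, v):
    return np.exp(-(x - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

for _ in range(50):  # EM iterations
    # E-step: posterior responsibility of each component for each datum.
    dens = w[:, None] * normal_pdf(data[None, :], mu[:, None], var[:, None])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate the parameters from the responsibilities.
    Nk = resp.sum(axis=1)
    w = Nk / len(data)
    mu = (resp * data).sum(axis=1) / Nk
    var = (resp * (data - mu[:, None])**2).sum(axis=1) / Nk

# A datum is then assigned to the component with the larger responsibility.
```

After convergence the estimated means sit near the true category centres, so thresholding the responsibilities recovers the two categories.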
For the entire collection see [Zbl 1470.00021].Community detection for weighted networks with unknown number of communitieshttps://www.zbmath.org/1483.621062022-05-16T20:40:13.078697Z"Liang, Hao"https://www.zbmath.org/authors/?q=ai:liang.hao"Zhang, Weiping"https://www.zbmath.org/authors/?q=ai:zhang.weiping|zhang.weiping.1Summary: In this paper, we propose a community detection approach for weighted networks by combining the infinite Gaussian mixture model and a spectral method. The spectral method provides the embeddings of nodes in a low-dimensional space. Keeping the uncertainty about the number of communities, we instead adapt the recently developed Bayesian nonparametric techniques to the Gaussian mixture model for clustering the embeddings. It reduces the risk of erroneous clustering assignments when inferring the number of communities and nodes' memberships simultaneously. Theoretical properties concerning the posterior contraction rate are established under model misspecification. The modeling framework can be easily adapted to other types of networks, such as correlation networks and attributed networks. The simulation studies and benchmark data analysis show that the proposed approach outperforms other comparison methods, demonstrating the consistency of the estimator for the number of communities.
The choice of more than 10 folds does not yield a significant increase in accuracy. The use of repeated cross-validation also does not provide a fundamental gain in precision. The results of the experiments allow us to formulate the empirical fact that the accuracy of the estimates obtained by the cross-validation method is approximately the same as the accuracy of the estimates obtained from a test sample of half the size. This result can be easily explained by the fact that all the objects of the test sample are independent, whereas the estimates built by cross-validation on different subsamples (folds) are not independent.
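The k-fold estimate discussed above can be reproduced with a plain numpy loop; the classifier (a nearest-centroid rule) and the synthetic data are illustrative stand-ins for the decision functions studied in the paper:

```python
import numpy as np

def kfold_accuracy(X, y, k, train_and_score):
    """Average held-out accuracy over k folds."""
    folds = np.array_split(np.arange(len(y)), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(scores))

# A nearest-centroid classifier as the decision function under study.
def nearest_centroid(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float(np.mean(pred == yte))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])
perm = rng.permutation(200)          # shuffle so every fold mixes both classes
X, y = X[perm], y[perm]
acc5 = kfold_accuracy(X, y, 5, nearest_centroid)
```

Comparing `acc5` with the estimate for 10 or more folds on the same data illustrates the plateau described in the summary: beyond a handful of folds the estimate barely changes.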
We use dynamic programming and low-rank techniques for tensors to provide efficient algorithms to compute this sequentialized kernel.Differentially private data release via statistical election to partition sequentiallyhttps://www.zbmath.org/1483.622022022-05-16T20:40:13.078697Z"Bowen, Claire McKay"https://www.zbmath.org/authors/?q=ai:bowen.claire-mckay"Liu, Fang"https://www.zbmath.org/authors/?q=ai:liu.fang.1|liu.fang"Su, Bingyue"https://www.zbmath.org/authors/?q=ai:su.bingyueSummary: Differential Privacy (DP) formalizes privacy in mathematical terms and provides a robust concept for privacy protection. Differentially Private Data Synthesis (DIPS) techniques produce and release synthetic individual-level data in the DP framework. One key challenge in developing DIPS methods is the preservation of the statistical utility of synthetic data, especially in high-dimensional settings. We propose a new DIPS approach, STatistical Election to Partition Sequentially (STEPS), which partitions data by attributes according to their importance ranks, determined by either a practical or a statistical importance measure. STEPS aims to achieve better original information preservation for the attributes with higher importance ranks and thus to produce more useful synthetic data overall. We present an algorithm to implement the STEPS procedure and employ privacy-budget composability to ensure the overall privacy cost is controlled at the pre-specified value. We apply the STEPS procedure to both simulated data and the 2000--2012 Current Population Survey youth voter data. 
The results suggest STEPS can better preserve the population-level information and the original information for some analyses compared to PrivBayes, a modified Uniform histogram approach, and the flat Laplace sanitizer.Monte Carlo methodshttps://www.zbmath.org/1483.650012022-05-16T20:40:13.078697Z"Barbu, Adrian"https://www.zbmath.org/authors/?q=ai:barbu.adrian"Zhu, Song-Chun"https://www.zbmath.org/authors/?q=ai:zhu.song-chunThis monograph, stemming from a series of courses given by the authors during the past ten years, is intended to be a textbook for graduate students in statistics, computer science and engineering. It covers a very broad range of topics developed in the field of Monte Carlo simulations, as is illustrated best by its contents:
\begin{itemize}
\item Introduction to Monte Carlo
\item Sequential Monte Carlo
\item MCMC: the basics
\item Metropolis method and variants
\item Gibbs sampler and variants
\item Data driven MCMC
\item Cluster sampling methods
\item Hamiltonian and Langevin Monte Carlo
\item Mapping the energy landscape
\end{itemize}
Basic intuitive ideas, selected algorithms and some theoretical concepts are developed. Besides that, quite a few examples from the field of image analysis are included. Each chapter ends with a rather long list of relevant references. Thus, it can also be used as a reference book by researchers in the fields of machine learning, pattern recognition, computer vision, and artificial intelligence.
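To give a flavour of the algorithms treated in the book, here is a bare-bones random-walk Metropolis sampler for a standard normal target (an illustrative sketch, not code from the book):

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    with probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    lp = log_target(x0)
    for _ in range(n_samples):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Target: standard normal, log-density known up to an additive constant.
chain = metropolis(lambda z: -0.5 * z * z, x0=0.0, n_samples=20000)
```

After discarding a burn-in prefix, the empirical mean and standard deviation of the chain approximate those of the target; the variants discussed in the book refine exactly this accept/reject mechanism.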
Although the previous list of covered topics seems quite appealing, I am afraid that the authors included too much material without covering it in sufficient depth. Potential readers should be aware that this is not a textbook where everything is developed from scratch. Indeed, it reminds me more of detailed lecture notes that have been ``smoothed'' to make them more easily readable for students. A reader who really wants to understand the covered problems must turn to the more detailed specialized literature.
Summarizing, the book can be a useful reference for many important Monte Carlo methods. Unfortunately, in many places the authors just ``flow over the material'' and do not provide the necessary deeper insight into the matter. What is perhaps more dangerous is that I found quite a few misprints in the book, so a reader who does not understand a topic in detail may apply the discussed algorithms incorrectly.
Reviewer: Jaromír Antoch (Praha)Selecting reduced models in the cross-entropy methodhttps://www.zbmath.org/1483.650112022-05-16T20:40:13.078697Z"Héas, P."https://www.zbmath.org/authors/?q=ai:heas.patrickMultilevel Monte Carlo covariance estimation for the computation of Sobol' indiceshttps://www.zbmath.org/1483.650122022-05-16T20:40:13.078697Z"Mycek, Paul"https://www.zbmath.org/authors/?q=ai:mycek.paul"De Lozzo, Matthias"https://www.zbmath.org/authors/?q=ai:de-lozzo.matthiasPhysics information aided kriging using stochastic simulation modelshttps://www.zbmath.org/1483.650192022-05-16T20:40:13.078697Z"Yang, Xiu"https://www.zbmath.org/authors/?q=ai:yang.xiu"Tartakovsky, Guzel"https://www.zbmath.org/authors/?q=ai:tartakovsky.guzel"Tartakovsky, Alexandre M."https://www.zbmath.org/authors/?q=ai:tartakovsky.alexandre-mThe gap between theory and practice in function approximation with deep neural networkshttps://www.zbmath.org/1483.650282022-05-16T20:40:13.078697Z"Adcock, Ben"https://www.zbmath.org/authors/?q=ai:adcock.ben"Dexter, Nick"https://www.zbmath.org/authors/?q=ai:dexter.nick-cSparse harmonic transforms. II: Best \(s\)-term approximation guarantees for bounded orthonormal product bases in sublinear-timehttps://www.zbmath.org/1483.650302022-05-16T20:40:13.078697Z"Choi, Bosu"https://www.zbmath.org/authors/?q=ai:choi.bosu"Iwen, Mark"https://www.zbmath.org/authors/?q=ai:iwen.mark-a"Volkmer, Toni"https://www.zbmath.org/authors/?q=ai:volkmer.toniSummary: In this paper we develop a sublinear-time compressive sensing algorithm for approximating functions of many variables which are compressible in a given Bounded Orthonormal Product Basis (BOPB). The resulting algorithm is shown to both have an associated best \(s\)-term recovery guarantee in the given BOPB, and also to work well numerically for solving sparse approximation problems involving functions contained in the span of fairly general sets of as many as \(\sim 10^{230}\) orthonormal basis functions. 
All code is made publicly available. As part of the proof of the main recovery guarantee, new variants of the well-known CoSaMP algorithm are proposed which can utilize any sufficiently accurate support identification procedure satisfying a Support Identification Property (SIP) in order to obtain strong sparse approximation guarantees. These new CoSaMP variants are then proven to have both runtime and recovery error behavior which are largely determined by the associated runtime and error behavior of the chosen support identification method. The main theoretical results of the paper are then established by developing a sublinear-time support identification algorithm for general BOPB sets which is robust to arbitrary additive errors. Using this new support identification method to create a new CoSaMP variant then results in a new robust sublinear-time compressive sensing algorithm for BOPB-compressible functions of many variables.
For Part I see [\textit{B. Choi} et al., Found. Comput. Math. 21, No. 2, 275--329 (2021; Zbl 1472.65171)].Free-boundary conformal parameterization of point cloudshttps://www.zbmath.org/1483.650322022-05-16T20:40:13.078697Z"Choi, Gary P. T."https://www.zbmath.org/authors/?q=ai:choi.gary-pui-tung"Liu, Yechen"https://www.zbmath.org/authors/?q=ai:liu.yechen"Lui, Lok Ming"https://www.zbmath.org/authors/?q=ai:lui.lok-mingSummary: With the advancement in 3D scanning technology, there has been a surge of interest in the use of point clouds in science and engineering. To facilitate the computations and analyses of point clouds, prior works have considered parameterizing them onto some simple planar domains with a fixed boundary shape such as a unit circle or a rectangle. However, the geometry of the fixed shape may lead to some undesirable distortion in the parameterization. It is therefore more natural to consider free-boundary conformal parameterizations of point clouds, which minimize the local geometric distortion of the mapping without constraining the overall shape. In this work, we develop a free-boundary conformal parameterization method for disk-type point clouds, which involves a novel approximation scheme of the point cloud Laplacian with accumulated cotangent weights together with a special treatment at the boundary points. With the aid of the free-boundary conformal parameterization, high-quality point cloud meshing can be easily achieved. Furthermore, we show that using the idea of conformal welding in complex analysis, the point cloud conformal parameterization can be computed in a divide-and-conquer manner. 
Experimental results are presented to demonstrate the effectiveness of the proposed method.Solvers for systems of linear algebraic equations with block-band matriceshttps://www.zbmath.org/1483.650532022-05-16T20:40:13.078697Z"Shteĭnberg, Boris Yakovlevich"https://www.zbmath.org/authors/?q=ai:shteinberg.boris-yakovlevich"Vasilenko, Aleksandr Aleksandrovich"https://www.zbmath.org/authors/?q=ai:vasilenko.aleksandr-aleksandrovich"Veselovskiĭ, Vadim Vladimirovich"https://www.zbmath.org/authors/?q=ai:veselovskii.vadim-vladimirovich"Zhivykh, Nikita Aleksandrovich"https://www.zbmath.org/authors/?q=ai:zhivykh.nikita-aleksandrovichSummary: The article proposes methods for constructing fast solvers for systems of linear algebraic equations with block-band matrices. A data structure for efficient storage of such matrices in RAM and a fast algorithm for solving systems of linear equations with such matrices based on this structure are proposed. The article is focused on the creation of solvers based on iterative algorithms for solving systems of linear equations with both symmetric matrices and matrices having a saddle-point singularity. It is proposed to develop and use a special precompiler to speed up the solver. The experimental solver is implemented in C, and the preliminary compilation in this paper is based on the Optimizing Parallelizing System. The results of numerical experiments that demonstrate the high efficiency of the developed methods, including the efficiency of the precompiler, are presented.A chord-Zhang neural network model for solving absolute value equationshttps://www.zbmath.org/1483.650702022-05-16T20:40:13.078697Z"Cui, Lu-Bin"https://www.zbmath.org/authors/?q=ai:cui.lubin"Hu, Qing"https://www.zbmath.org/authors/?q=ai:hu.qingSummary: We first present a Zhang neural network model for solving the absolute value equation. In order to avoid recomputing the changing Jacobian matrix, we combine the model with the chord method and fix the generalized Jacobian. 
Then we obtain the chord-Zhang neural network model. The convergence of the proposed models is studied. Moreover, some numerical experiments are presented to illustrate the efficiency of the proposed schemes.Matching component analysis for transfer learninghttps://www.zbmath.org/1483.650942022-05-16T20:40:13.078697Z"Clum, Charles"https://www.zbmath.org/authors/?q=ai:clum.charles"Mixon, Dustin G."https://www.zbmath.org/authors/?q=ai:mixon.dustin-g"Scarnati, Theresa"https://www.zbmath.org/authors/?q=ai:scarnati.theresaUnified acceleration of high-order algorithms under general Hölder continuityhttps://www.zbmath.org/1483.651002022-05-16T20:40:13.078697Z"Song, Chaobing"https://www.zbmath.org/authors/?q=ai:song.chaobing"Jiang, Yong"https://www.zbmath.org/authors/?q=ai:jiang.yong|jiang.yong.1"Ma, Yi"https://www.zbmath.org/authors/?q=ai:ma.yiA strongly-consistent difference scheme for 3D nonlinear Navier-Stokes equationshttps://www.zbmath.org/1483.651442022-05-16T20:40:13.078697Z"Zhang, Xiaojing"https://www.zbmath.org/authors/?q=ai:zhang.xiaojing"Chen, Yufu"https://www.zbmath.org/authors/?q=ai:chen.yufuSummary: This paper constructs a strongly-consistent explicit finite difference scheme for the 3D constant viscosity incompressible Navier-Stokes equations by means of symbolic algebraic computation. The difference scheme is second order accurate in space and first order accurate in time. It is proved that the difference Gröbner basis algorithm is correct. Using the difference Gröbner basis computation method, an element in the Gröbner basis of the difference scheme for the momentum equations is a difference scheme for the pressure Poisson equation. The authors find that the truncation error expressions of the difference scheme are consistent with the continuous error functions for the modified version of the above difference equation. 
The authors prove that, for a strongly consistent difference scheme, each element in the difference Gröbner basis of such a scheme always approximates a differential equation which vanishes on the analytic solutions of the Navier-Stokes equations. To prove the strong consistency of this difference scheme, the differential Thomas decomposition theorem for nonlinear differential equations and difference Gröbner basis theorems for difference equations are applied. Numerical tests confirm that the strongly consistent difference scheme is effective.Scientific visualization for the ODE-based simulator BedSim of LKABhttps://www.zbmath.org/1483.652372022-05-16T20:40:13.078697Z"Dechevsky, Lubomir"https://www.zbmath.org/authors/?q=ai:dechevsky.lubomir-t"Gundersen, Joakim"https://www.zbmath.org/authors/?q=ai:gundersen.joakimSummary: This work describes an applied-mathematics and computer-graphics project for scientific visualization of the industrial processes simulated in the testbed BedSim of the Swedish company LKAB. The various computational, geometric and simulation aspects discussed in the paper are common to the visualization of the output of any simulator of large and complex dynamical systems. One main conclusion (in Section 6) is the necessity to upgrade from an ODE-system-based model to a PDE-based one. The follow-up papers to this one, see Parts I--IV [Zbl 1483.65238; Zbl 1483.65239; Zbl 1483.65240; Zbl 1483.65241], describe some challenges and solutions on the way to achieving such an upgrade.Data clustering. Theory, algorithms, and applicationshttps://www.zbmath.org/1483.680012022-05-16T20:40:13.078697Z"Gan, Guojun"https://www.zbmath.org/authors/?q=ai:gan.guojun"Ma, Chaoqun"https://www.zbmath.org/authors/?q=ai:ma.chaoqun"Wu, Jianhong"https://www.zbmath.org/authors/?q=ai:wu.jianhong.1This is the second edition of a book on the principles, algorithms and applications of clustering. 
It covers the basic notions, then outlines a number of algorithms grouped by conceptual categories and finally provides examples and code. It is a good reference book for those taking their first steps into the field as well as for experienced researchers.
For the first edition see [Zbl 1185.68274].
Reviewer: Ruxandra Stoean (Craiova)Quantum computing for programmershttps://www.zbmath.org/1483.680022022-05-16T20:40:13.078697Z"Hundt, Robert"https://www.zbmath.org/authors/?q=ai:hundt.robertPublisher's description: This introduction to quantum computing from a classical programmer's perspective is meant for students and practitioners alike. Over 25 fundamental algorithms are explained with full mathematical derivations and classical code for simulation, using an open-source code base developed from the ground up in Python and C++. After presenting the basics of quantum computing, the author focuses on algorithms and the infrastructure to simulate them efficiently, beginning with quantum teleportation, superdense coding, and Deutsch-Jozsa. Coverage of advanced algorithms includes the quantum supremacy experiment, quantum Fourier transform, phase estimation, Shor's algorithm, Grover's algorithm with derivatives, quantum random walks, and the Solovay-Kitaev algorithm for gate approximation. Quantum simulation is explored with the variational quantum eigensolver, quantum approximate optimization, and the Max-Cut and Subset-Sum algorithms. The book also discusses issues around programmer productivity, quantum noise, error correction, and challenges for quantum programming languages, compilers, and tools, with a final section on compiler techniques for transpilation.Making images with mathematicshttps://www.zbmath.org/1483.680032022-05-16T20:40:13.078697Z"Sourin, Alexei"https://www.zbmath.org/authors/?q=ai:sourin.alexeiComputer graphics is omnipresent these days. However, few people realize what a broad and solid theory stands behind it. In this book a reader can see how to apply this theory to turn geometry into an image on a computer screen. It touches on all the important aspects of producing a virtual 3D world and a realistic-looking image from this artificial 3D world. The book consists of seven chapters. 
In the first chapter the foundations of computer graphics are presented, including the visualization pipeline. The next chapter deals with drawing lines, curves, polygons, surfaces and solid objects. In the third chapter, transformations and projections are reviewed. In the next chapter, different kinds of motion are presented, such as animations, gravity-based motion and morphing. Chapter 5 is about visual appearance, that is, lighting, illumination, shading and texture mapping. Chapter 6 treats the common techniques used for fast real-time rendering. Finally, in the last chapter short reviews of computer graphics tools are given, such as OpenGL, POV-Ray and VRML.
This book is intended for computer science and engineering students and computer graphics practitioners. It can also be used by teachers as reference material for a one-semester course of computer graphics and visualization. The book is self-contained and presents the theory in a very accessible way with many visual examples.
Reviewer: Agnieszka Lisowska (Sosnowiec)Informatics and machine learning. From martingales to metaheuristicshttps://www.zbmath.org/1483.680042022-05-16T20:40:13.078697Z"Winters-Hilt, Stephen"https://www.zbmath.org/authors/?q=ai:winters-hilt.stephenPublisher's description: Informatics and Machine Learning: From Martingales to Metaheuristics delivers an interdisciplinary presentation on how to analyze any data captured in digital form. The book describes how readers can conduct analyses of text, general sequential data, experimental observations over time, stock market and econometric histories, or symbolic data, like genomes. It contains large amounts of sample code to demonstrate the concepts contained within and assist with various levels of project work.
The book offers a complete presentation of the mathematical underpinnings of a wide variety of forms of data analysis and provides extensive examples of programming implementations. It is based on two decades' worth of the distinguished author's teaching and industry experience.
\begin{itemize}
\item A thorough introduction to probabilistic reasoning and bioinformatics, including Python shell scripting to obtain data counts, frequencies, probabilities, and anomalous statistics, or use with Bayes' rule
\item An exploration of information entropy and statistical measures, including Shannon entropy, relative entropy, maximum entropy (maxent), and mutual information
\item A practical discussion of ad hoc, ab initio, and bootstrap signal acquisition methods, with examples from genome analytics and signal analytics
\end{itemize}
Perfect for undergraduate and graduate students in machine learning and data analytics programs, Informatics and Machine Learning: From Martingales to Metaheuristics will also earn a place in the libraries of mathematicians, engineers, computer scientists, and life scientists with an interest in those subjects.Natural language processing. A machine learning perspectivehttps://www.zbmath.org/1483.680052022-05-16T20:40:13.078697Z"Zhang, Yue"https://www.zbmath.org/authors/?q=ai:zhang.yue"Teng, Zhiyang"https://www.zbmath.org/authors/?q=ai:teng.zhiyang.1This book discusses NLP from the machine learning perspective. It begins with the general concepts of NLP and then focuses on the main techniques pertaining to machine learning from different angles: from classification to structure modelling, from unsupervised to supervised learning, from statistics to deep learning, from generative to discriminative models, from exact to approximate methods. Algorithms are provided in pseudocode and new exercises are also given. The book is a valuable tool for both beginning and advanced researchers in the field.
Reviewer: Catalin Stoean (Craiova)Non-standard semantics for graph query languageshttps://www.zbmath.org/1483.680062022-05-16T20:40:13.078697Z"Mennicke, Stephan"https://www.zbmath.org/authors/?q=ai:mennicke.stephanPublisher's description: As an efficient alternative to classical subgraph isomorphism, simulations have established themselves for various tasks in graph data management, e.g. in description languages for tree-structured data. We first devote ourselves to this theory with a view to modern (non-tree-structured) graph database models. After a thorough study of the role of root nodes, we succeed in deriving a correct semantics for graph schemas. Moreover, we extend the model with mandatory attributes, for which we likewise give a correct semantics. Simulations are also examined with regard to their pragmatic value for simple graph queries.
In the second part of the thesis, we complement dual simulations with classical operators of the query language SPARQL. Unfortunately, this turns out to be an unsolvable task as soon as interesting combining operators of the language are added. The resulting query languages are neither correct nor complete with respect to the original semantics. For fragments, completeness and even efficient solvability of classical query-language problems can be established. Via several approximation steps, we finally succeed in defining a complete SPARQL semantics based on dual simulations. This semantics has the property of summarizing all SPARQL results in a single match. From it we develop an algorithmic solution that can be used as a preprocessing step for SPARQL query answering. Established, efficient algorithms for computing simulations all scale equally poorly with the database size; the more general tools are insufficiently adapted to common assumptions about graph data. We analyze such assumptions and, on their basis, develop an algorithm that scales considerably better than the existing ones. Finally, we use the developed tool to evaluate our pruning semantics for SPARQL.Theoretical aspects of computing -- ICTAC 2021. 18th international colloquium, virtual event, Nur-Sultan, Kazakhstan, September 8--10, 2021. Proceedingshttps://www.zbmath.org/1483.680072022-05-16T20:40:13.078697ZThe articles of this volume will be reviewed individually. For the preceding colloquium see [Zbl 1464.68020].
Indexed articles:
\textit{van der Aalst, Wil M. P.}, Concurrency and objects matter! Disentangling the fabric of real operational processes to create digital twins, 3-17 [Zbl 07500634]
\textit{Dix, Alan}, Qualitative-quantitative reasoning: thinking informally about formal things, 18-35 [Zbl 07500635]
\textit{Bustamante, Luis Henrique; Martins, Ana Teresa}, Some aspects of the database resilience, 39-46 [Zbl 07500636]
\textit{König, Jürgen; Wehrheim, Heike}, On the correctness problem for serializability, 47-64 [Zbl 07500637]
\textit{Erkens, Rick; Groote, Jan Friso}, A set automaton to locate all pattern matches in a term, 67-85 [Zbl 07500638]
\textit{Khan, Muhammad Hannan; Hassan, Osman; Khan, Shahid}, Accelerating SpMV multiplication in probabilistic model checkers using GPUs, 86-104 [Zbl 07500639]
\textit{Phyo, Yati; Do, Canh Minh; Ogata, Kazuhiro}, A divide \& conquer approach to conditional stable model checking, 105-111 [Zbl 07500640]
\textit{Cruz-Filipe, Luís; Montesi, Fabrizio; Peressotti, Marco}, Certifying choreography compilation, 115-133 [Zbl 07500641]
\textit{Kastenbaum, Stéphane; Boyer, Benoît; Talpin, Jean-Pierre}, A mechanically verified theory of contracts, 134-151 [Zbl 07500642]
\textit{Li, Liyi; Gunter, Elsa L.}, A complete semantics of \(\mathbb{K}\) and its translation to Isabelle, 152-171 [Zbl 07500643]
\textit{Díaz-Caro, Alejandro; Dowek, Gilles}, A new connective in natural deduction, and its application to quantum computing, 175-193 [Zbl 07500644]
\textit{Biswas, Sayan; Jung, Kangsoo; Palamidessi, Catuscia}, An incentive mechanism for trading personal data in data markets, 197-213 [Zbl 07500645]
\textit{Eisentraut, Julia; Holzer, Stephan; Klioba, Katharina; Křetínský, Jan; Pin, Lukas; Wagner, Alexander}, Assessing security of cryptocurrencies with attack-defense trees: proof of concept and future directions, 214-234 [Zbl 07500646]
\textit{Horne, Ross; Mauw, Sjouke; Yurkov, Semen}, Compositional analysis of protocol equivalence in the applied \(\pi \)-calculus using quasi-open bisimilarity, 235-255 [Zbl 07500647]
\textit{Manabe, Yoshifumi; Ono, Hibiki}, Card-based cryptographic protocols with a standard deck of cards using private operations, 256-274 [Zbl 07500648]
\textit{Prasad, Sanjiva; Yerraguntla, R. Madhukar}, Normalising Lustre preserves security, 275-292 [Zbl 07500649]
\textit{Chu, Wenjing; Chen, Shuo; Bonsangue, Marcello}, Learning probabilistic automata using residuals, 295-313 [Zbl 07500650]
\textit{Drămnesc, Isabela; Jebelean, Tudor}, \textit{AlCons }: deductive synthesis of sorting algorithms in \textit{Theorema}, 314-333 [Zbl 07500651]
\textit{Senda, Ryoma; Takata, Yoshiaki; Seki, Hiroyuki}, Reactive synthesis from visibly register pushdown automata, 334-353 [Zbl 07500652]
\textit{Hainry, Emmanuel; Jeandel, Emmanuel; Péchoux, Romain; Zeyen, Olivier}, \textsc{ComplexityParser}: an automatic tool for certifying poly-time complexity of Java programs, 357-365 [Zbl 07500653]
\textit{Miculan, Marino; Pasqua, Michele}, A calculus for attribute-based memory updates, 366-385 [Zbl 07500654]
\textit{Shiraishi, Tomoki; Kikuchi, Kentaro; Aoto, Takahito}, A proof method for local sufficient completeness of term rewriting systems, 386-404 [Zbl 07500655]Mathematics for future computing and communicationshttps://www.zbmath.org/1483.680082022-05-16T20:40:13.078697ZPublisher's description: For 80 years, mathematics has driven fundamental innovation in computing and communications. This timely book provides a panorama of some recent ideas in mathematics and how they will drive continued innovation in computing, communications and AI in the coming years. It provides a unique insight into how the new techniques that are being developed can be used to provide theoretical foundations for technological progress, just as mathematics was used in earlier times by Turing, von Neumann, Shannon and others. Edited by leading researchers in the field, chapters cover the application of new mathematics in computer architecture, software verification, quantum computing, compressed sensing, networking, Bayesian inference, machine learning, reinforcement learning and many other areas.
The articles of this volume will be reviewed individually.
Indexed articles:
\textit{McColl, Bill}, Mathematics, models and architectures, 6-53 [Zbl 07481171]
\textit{Chen, Haibo; Gao, Xin}, Mathematics and software verification, 54-73 [Zbl 07481172]
\textit{Kong, Yunchuan}, Mathematics for quantum computing, 74-97 [Zbl 07481173]
\textit{Bennequin, Daniel; Belfiore, Jean-Claude}, Mathematics for AI: categories, toposes, types, 98-132 [Zbl 07481174]
\textit{Zhang, Rui; Long, Zichao}, Mathematics and compressed sensing, 138-152 [Zbl 07481176]
\textit{Debbah, Mérouane}, Mathematics, information theory, and statistical physics, 153-186 [Zbl 07481177]
\textit{Li, Zongpeng; Miao, Lihua; Tang, Siyu}, Mathematics of data networking, 187-210 [Zbl 07481178]
\textit{Sun, Jie}, Mathematics and network science, 211-224 [Zbl 07481179]
\textit{Tong, Wen; Ge, Yiqun}, Mathematics, information and learning, 230-284 [Zbl 07481181]
\textit{Guo, Kaiyang; Lv, Wenlong; Zhang, Jianfeng}, Mathematics and Bayesian inference, 285-308 [Zbl 07481182]
\textit{Jui, Shang-Ling}, Mathematics, optimization and machine learning, 309-328 [Zbl 07481183]
\textit{Wu, Shuang; Wang, Jun}, Mathematics of reinforcement learning, 329-374 [Zbl 07481184]
\textit{Dang, Wenshuan}, Mathematics and prospects for future breakthroughs, 377-383 [Zbl 07481186]Logics for new-generation AI 2021. First international workshop, LNGAI 2021, June, 18--20 2021, Hangzhou, Chinahttps://www.zbmath.org/1483.680092022-05-16T20:40:13.078697ZPublisher's description: With the development of several new directions of AI, including explainable AI, ethical AI and knowledge-based AI, the corresponding directions of logical research are gaining momentum: causal reasoning, norms and value reasoning, and knowledge graph reasoning. In an open and dynamic environment, the main challenges for modeling such kinds of reasoning are to deal with information that is typically incomplete, uncertain, dynamic and conflicting, and to effectively explain the results and procedures of reasoning to ordinary human beings. Nonmonotonic logics and formal argumentation can be used as fundamental theories to cope with these challenges. The papers in this volume well reflect the state-of-the-art of the research on related topics.
The articles of this volume will be reviewed individually.
Indexed articles:
\textit{Li, Chonghui; Liao, Beishui}, Intrinsic and extrinsic argument strengths in collective argumentation, 1-17 [Zbl 07483820]
\textit{Fuenmayor, David; Steen, Alexander}, A flexible approach to argumentation framework analysis using theorem proving, 18-32 [Zbl 07483821]
\textit{Streit, David}, Experiments in causality and STIT, 33-45 [Zbl 07483822]
\textit{Gabbay, Dov; Kampik, Timotheus}, A brief introduction to the Shkop approach to conflict resolution in formal argumentation, 46-62 [Zbl 07483823]
\textit{Pigozzi, Gabriella; Rouchier, Juliette}, Spaces of argumentation and their interaction, 63-67 [Zbl 07483824]
\textit{Dong, Huimin; Wáng, Yi N.}, A modal logic of defeasible reasoning, 68-80 [Zbl 07483825]
\textit{Pasetto, Luca; Cristani, Matteo; Olivieri, Francesco; Governatori, Guido}, Automated translation of contract texts into defeasible deontic logic, 81-92 [Zbl 07483826]
\textit{Qiao, Lisha; Shen, Yiqi; Yu, Liuwen; Liao, Beishui; van der Torre, Leendert}, Arguing coalitions in abstract argumentation, 93-106 [Zbl 07483827]
\textit{Das, Ramit; Ramanujam, R.}, A logical description of strategizing in social network games, 107-119 [Zbl 07483828]
\textit{Markovich, Réka; Najjar, Amro; van der Torre, Leendert}, New-generation AIs: reasoning about norms and values, 120-131 [Zbl 07483829]
\textit{Yu, Zhe; Ju, Shier}, Getting consensus through a context-based argumentation framework, 132-145 [Zbl 07483830]
\textit{Wei, Bin}, Analysing plausible reasoning with a gradual argumentation model, 146-149 [Zbl 07483831]
\textit{Zheng, Heng; Grossi, Davide; Verheij, Bart}, A formal approach to case comparison in case-based reasoning: research abstract, 150-153 [Zbl 07483832]
\textit{Markovich, Réka; Roy, Olivier}, Formalizing the right to know: epistemic rights as normative positions, 154-159 [Zbl 07483833]Efficient visual recognitionhttps://www.zbmath.org/1483.680102022-05-16T20:40:13.078697ZFrom the text: The goal of this special issue has been to solicit and publish high quality papers addressing the ``efficiency'' of efficient visual recognition from different aspects, and identify future promising research directions.25th international conference on database theory, ICDT 2022, Edinburgh, UK, virtual conference, March 29 -- April 1, 2022https://www.zbmath.org/1483.680112022-05-16T20:40:13.078697ZThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1465.68027].Mathematical aspects of computer and information sciences. 8th international conference, MACIS 2019, Gebze, Turkey, November 13--15, 2019. Revised selected papershttps://www.zbmath.org/1483.680122022-05-16T20:40:13.078697ZThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1381.68002].
Indexed articles:
\textit{Akoglu, Tulay Ayyildiz; Szanto, Agnes}, Certified Hermite matrices from approximate roots -- univariate case, 3-9 [Zbl 07441055]
\textit{Sato, Yosuke; Sekigawa, Hiroshi; Fukasaku, Ryoya; Nabeshima, Katsusuke}, On parametric border bases, 10-15 [Zbl 07441056]
\textit{Diatta, Sény; Moroz, Guillaume; Pouget, Marc}, Reliable computation of the singularities of the projection in \(\mathbb{R}^3\) of a generic surface of \(\mathbb{R}^4\), 16-34 [Zbl 07441057]
\textit{Ledoux, Viviane; Moroz, Guillaume}, Evaluation of Chebyshev polynomials on intervals and application to root finding, 35-41 [Zbl 07441058]
\textit{Ablinger, Jakob}, Proving two conjectural series for \(\zeta (7)\) and discovering more series for \(\zeta (7)\), 42-47 [Zbl 07441059]
\textit{Nabeshima, Katsusuke; Tajima, Shinichi}, Generalized integral dependence relations, 48-63 [Zbl 07441060]
\textit{Levin, Alexander}, Hilbert-type dimension polynomials of intermediate difference-differential field extensions, 64-79 [Zbl 1477.12004]
\textit{Couto, Ana C. Camargos; Maza, Marc Moreno; Linder, David; Jeffrey, David J.; Corless, Robert M.}, Comprehensive \(LU\) factors of polynomial matrices, 80-88 [Zbl 07441062]
\textit{Pan, Victor Y.; Luan, Qi; Svadlenka, John; Zhao, Liang}, Sublinear cost low rank approximation via subspace sampling, 89-104 [Zbl 07441063]
\textit{Luan, Qi; Pan, Victor Y.}, CUR LRA at sublinear cost based on volume maximization, 105-121 [Zbl 07441064]
\textit{Imbach, Rémi; Pan, Victor Y.}, New practical advances in polynomial root clustering, 122-137 [Zbl 07441065]
\textit{Mou, Chenqi; Lai, Jiahua}, On the chordality of simple decomposition in top-down style, 138-152 [Zbl 07441066]
\textit{Drămnesc, Isabela; Jebelean, Tudor}, Automatic synthesis of merging and inserting algorithms on binary trees using multisets in \textit{Theorema}, 153-168 [Zbl 07441067]
\textit{Huang, Bo; Niu, Wei}, Algebraic analysis of bifurcations and chaos for discrete dynamical systems, 169-184 [Zbl 07441068]
\textit{Esirci, Fatma Nur; Bayrakci, Alp Arslan}, Acceleration of spatial correlation based hardware trojan detection using shared grids ratio, 187-201 [Zbl 07441069]
\textit{Ulu, Metin Evrim; Cenk, Murat}, A parallel GPU implementation of SWIFFTX, 202-217 [Zbl 07441070]
\textit{Borges-Quintana, Mijail; Borges-Trenard, Miguel Ángel; Martínez-Moro, Edgar; Torres-Guerrero, Gustavo}, Computing an invariant of a linear code, 218-233 [Zbl 07441071]
\textit{Mehta, Sanyam; Saraswat, Vishal}, Generalized secret sharing schemes using N\(^\mu\)MDS codes, 234-248 [Zbl 07441072]
\textit{Yıldırım, Hamdi Murat}, Exploiting linearity of modular multiplication, 249-269 [Zbl 07441073]
\textit{Uncu, Ali Kemal}, On a weighted spin of the Lebesgue identity, 273-279 [Zbl 07441074]
\textit{Büyükçolak, Yasemin; Gözüpek, Didem; Özkan, Sibel}, Edge-critical equimatchable bipartite graphs, 280-287 [Zbl 07441075]
\textit{Alnajjarine, Nour; Lavrauw, Michel}, Determining the rank of tensors in \(\mathbb{F}_q^2\otimes \mathbb{F}_q^3\otimes \mathbb{F}_q^3\), 288-294 [Zbl 07441076]
\textit{Tarsissi, Lama; Vuillon, Laurent}, Second order balance property on Christoffel words, 295-312 [Zbl 07441077]
\textit{Wagner, Michael; Kampel, Ludwig; Simos, Dimitris E.}, IPO-Q: a quantum-inspired approach to the IPO strategy used in CA generation, 313-323 [Zbl 07441078]
\textit{Sevim, Taha; Güvel, Muhammet Selçuk; Özkahya, Lale}, A fast counting method for 6-motifs with low connectivity, 324-332 [Zbl 07441079]
\textit{Alexandersson, Per; Restadh, Petter}, LaserTank is NP-complete, 333-338 [Zbl 07441080]
\textit{Florescu, Dorian; England, Matthew}, Improved cross-validation for classifiers that make algorithmic choices to minimise runtime without compromising output correctness, 341-356 [Zbl 07441081]
\textit{Carlini, Luca; Ay, Nihat; Görgen, Christiane}, A numerical efficiency analysis of a common ancestor condition, 357-363 [Zbl 07441082]
\textit{Çelik, Türkü Özlüm; Jamneshan, Asgar; Montúfar, Guido; Sturmfels, Bernd; Venturello, Lorenzo}, Optimal transport to a variety, 364-381 [Zbl 07441083]
\textit{Zhang, Haoliang; Xu, Hongbo; Shi, Jinqiao; Liu, Tingwen; Ya, Jing}, SFV-CNN: deep text sentiment classification with scenario feature representation, 382-394 [Zbl 07441084]
\textit{Islam, Muhammad Zubair; Mehmood, Kashif; Kim, Hyung Seok}, Reinforcement learning based interactive agent for personalized mathematical skill enhancement, 395-407 [Zbl 07441085]
\textit{Isik, Sahin; Ozkan, Kemal}, Common vector approach based image gradients computation for edge detection, 408-421 [Zbl 07441086]
\textit{Mokhtari, Bilal; Melkemi, Kamal Eddine; Michelucci, Dominique; Foufou, Sebti}, Optimizing query perturbations to enhance shape retrieval, 422-437 [Zbl 07441087]
\textit{Kettaf, Chahrazed; Yousfate, Abderrahmane}, Authorship attribution by functional discriminant analysis, 438-449 [Zbl 07441088]
\textit{Mantzaflaris, Angelos}, An overview of geometry plus simulation modules, 453-456 [Zbl 07441089]
\textit{Jiménez-Pastor, Antonio}, DD-finite functions implemented in Sage, 457-462 [Zbl 07441090]Euro-Par 2021: parallel processing. 27th international conference on parallel and distributed computing, Lisbon, Portugal, September 1--3, 2021. Proceedingshttps://www.zbmath.org/1483.680132022-05-16T20:40:13.078697ZThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 07332299].Guest editorial: Generative adversarial networks for computer visionhttps://www.zbmath.org/1483.680142022-05-16T20:40:13.078697ZFrom the text: The goal of this special issue is to solicit original work at the intersection of computer vision and deep generative models such as GANs.Robust decentralized diagnosability of networked discrete event systems against DoS and deception attackshttps://www.zbmath.org/1483.680152022-05-16T20:40:13.078697Z"Alves, Marcos V. S."https://www.zbmath.org/authors/?q=ai:alves.marcos-v-s"Barcelos, Raphael J."https://www.zbmath.org/authors/?q=ai:barcelos.raphael-julio"Carvalho, Lilian K."https://www.zbmath.org/authors/?q=ai:carvalho.lilian-kawakami"Basilio, João C."https://www.zbmath.org/authors/?q=ai:basilio.joao-carlosSummary: Denial-of-Service (DoS) attacks are conducted by malicious agents and consist in disrupting, temporarily or indefinitely, the services provided by a communication network. When a malicious agent gets access to some network node, it may also perform deception attacks by inserting valid packets with fake information into vulnerable channels. We address, in this paper, DoS and deception attacks (DoS-D attacks) that flood some communication channels with fake packets, causing delays, loss of observations, and insertion of fake observations, and their implications for decentralized fault diagnosability of networked discrete event systems (NDES). 
To this end, we propose an automaton model for NDES subject to DoS-D attacks that represents their adverse effects on the observations of local diagnosers. We introduce a new codiagnosability definition called DoS-D-robust codiagnosability, and present a necessary and sufficient condition for a language to be DoS-D-robustly codiagnosable. We also propose a verification algorithm for regular languages to check DoS-D-robust codiagnosability.Distributed bare-bones communication in wireless networkshttps://www.zbmath.org/1483.680162022-05-16T20:40:13.078697Z"Chlebus, Bogdan S."https://www.zbmath.org/authors/?q=ai:chlebus.bogdan-stanislaw"Kowalski, Dariusz R."https://www.zbmath.org/authors/?q=ai:kowalski.dariusz-r"Vaya, Shailesh"https://www.zbmath.org/authors/?q=ai:vaya.shaileshSummary: We consider wireless networks operating under the SINR model of interference. Nodes have limited individual knowledge and capabilities: they do not know their positions in a coordinate system in the plane; furthermore, they do not know their neighborhoods, nor do they know the size of the network \(n\), and they cannot sense collisions resulting from simultaneous transmissions by at least two neighbors. Each node is equipped with a unique integer name, where \(N\) is an upper bound on the range of names. We refer to a subnetwork induced by a diameter-preserving dominating set of nodes as a backbone. Let \(\Delta\) denote the maximum number of nodes that can successfully receive a message transmitted by a node when no other nodes transmit concurrently. We study distributed algorithms for communication problems in three settings. In the single-node-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm that wakes up all nodes in \(\mathcal{O}(n \log^2 N)\) rounds with high probability. 
For the synchronized-start case, when all nodes start an execution simultaneously, we give a randomized algorithm computing a backbone in \(\mathcal{O}(\Delta\log^7 N)\) rounds with high probability. In the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone in time \(\mathcal{O}(n\log^2 N +\Delta\log^7 N)\) with high probability.Multi-objective optimization of switched Ethernet topology based on mapping matrixhttps://www.zbmath.org/1483.680172022-05-16T20:40:13.078697Z"Li, Jie Lin"https://www.zbmath.org/authors/?q=ai:li.jielin"Chen, Ming"https://www.zbmath.org/authors/?q=ai:chen.ming"Chen, De Ji"https://www.zbmath.org/authors/?q=ai:chen.deji(no abstract)Modeling and analyzing the dynamic spreading of epidemic malware by a network eigenvalue methodhttps://www.zbmath.org/1483.680182022-05-16T20:40:13.078697Z"Liu, Wanping"https://www.zbmath.org/authors/?q=ai:liu.wanping"Zhong, Shouming"https://www.zbmath.org/authors/?q=ai:zhong.shou-mingSummary: This paper mainly focuses on studying the influence of network characteristics on malware spreading. Firstly, a generalized model with weakly-protected and strongly-protected susceptible nodes is developed by considering the possibility of an intruded node converting back to a weakly-protected susceptible one. The dynamics of the generalized compartmental model are discussed and analyzed in depth, and several sufficient conditions for its global stability are derived. Following this work, a novel node-based model is proposed to describe malware propagation over an arbitrary connected network, including synthetic and real networks. From a microscopic perspective, we establish the novel model by introducing, for each node, several variables that describe the probabilities of the node being in the respective states. 
Our theoretical analysis shows that the largest eigenvalue of the propagating network is a key factor determining malware prevalence. Specifically, the range of the leading eigenvalue can be split into three subintervals: depending on which subinterval the largest eigenvalue of the propagating network falls into, malware approaches extinction very quickly, tends to extinction, or persists. Theoretically, the trivial equilibrium of our new node-based model is proved to be globally exponentially stable when the maximum eigenvalue is less than a threshold. We also illustrate the predictive effectiveness of our model through numerical simulations on regular and scale-free networks. Consequently, we conclude that malware prevalence can be effectively prevented by properly adjusting the spreading network, e.g., reducing the number of nodes and deleting some edges, so that its maximum eigenvalue falls into the appropriate subinterval.A compartmental model to explore the interplay between virus epidemics and honeynet potencyhttps://www.zbmath.org/1483.680192022-05-16T20:40:13.078697Z"Ren, Jianguo"https://www.zbmath.org/authors/?q=ai:ren.jianguo"Xu, Yonghong"https://www.zbmath.org/authors/?q=ai:xu.yonghongSummary: Honeynet technology is an active approach that is used to capture novel viruses and provide feedback on a matching immunization strategy. A compartmental model is formulated and analyzed to explore the interplay between virus epidemics and the potency of a heterogeneous honeynet. Theoretical analysis of the model gives the conditions under which the minimum size and the best location for configuring a honeynet are determined. Furthermore, a honeypot with more system vulnerabilities is beneficial for mitigating the virus epidemic to a lower level, whereas a honeynet with a lower power-law index is better for acquiring virus samples. A number of numerical examples are presented to illustrate the theoretical analysis. 
On the basis of the results, some ideas for imposing restrictions on the spread of viruses or improving the design of a honeynet are suggested.Pair quenched mean-field approach to epidemic spreading in multiplex networkshttps://www.zbmath.org/1483.680202022-05-16T20:40:13.078697Z"Wu, Qingchu"https://www.zbmath.org/authors/?q=ai:wu.qingchu"Hadzibeganovic, Tarik"https://www.zbmath.org/authors/?q=ai:hadzibeganovic.tarikSummary: Using the law of total probability, we extend the pair quenched mean-field approach for epidemic spreading in monoplex networks to the scenario of contagion outbreaks in multiplex networks. By means of the quasi-static approximation, we derive the condition for the epidemic threshold in a static multiplex network overlaid with a randomly connected subnetwork without clustering. Our theoretical results are in good agreement with continuous-time Gillespie algorithm-based simulations for 2-layer and 3-layer multiplex networks, revealing the advantage of our model in the prediction of epidemic spreading relative to the quenched mean-field (QMF) approach. 
Importantly, our study demonstrates that unlike the standard QMF approach, the pair QMF model can be used to assess the influence of the link overlap on the epidemic threshold, thereby carrying vital implications for future epidemiological research and policy development.Rule-based stream reasoning for intelligent administration of content-centric networkshttps://www.zbmath.org/1483.680212022-05-16T20:40:13.078697Z"Beck, Harald"https://www.zbmath.org/authors/?q=ai:beck.harald"Bierbaumer, Bruno"https://www.zbmath.org/authors/?q=ai:bierbaumer.bruno"Dao-Tran, Minh"https://www.zbmath.org/authors/?q=ai:dao-tran.minh"Eiter, Thomas"https://www.zbmath.org/authors/?q=ai:eiter.thomas"Hellwagner, Hermann"https://www.zbmath.org/authors/?q=ai:hellwagner.hermann"Schekotihin, Konstantin"https://www.zbmath.org/authors/?q=ai:schekotihin.konstantinSummary: Content-centric networking (CCN) research addresses the mismatch between the modern usage of the Internet and its outdated architecture. Importantly, CCN routers use various caching strategies to locally cache content frequently requested by end users. However, it is unclear which content should be stored and when it should be replaced. In this work, we employ novel techniques towards intelligent administration of CCN routers. Our approach allows for autonomous switching between existing strategies in response to changing content request patterns, using the rule-based stream reasoning framework LARS, which extends answer set programming for streams. The resulting ability to reconfigure routers flexibly at runtime allows for faster experimentation and may result in significant performance gains, as shown in our evaluation.
For the entire collection see [Zbl 1350.68015].A hybrid analytical scheme for the numerical computation of time fractional computer virus propagation model and its stability analysishttps://www.zbmath.org/1483.680222022-05-16T20:40:13.078697Z"Dubey, Ved Prakash"https://www.zbmath.org/authors/?q=ai:dubey.ved-prakash"Kumar, Rajnesh"https://www.zbmath.org/authors/?q=ai:kumar.rajnesh"Kumar, Devendra"https://www.zbmath.org/authors/?q=ai:kumar.devendra.3Summary: In this paper, we present an application of the homotopy perturbation transform method to compute the approximate analytical solution of the nonlinear fractional order computer virus propagation (CVP) model. The fractional derivatives are used in the Caputo sense. The proposed approximate method generates the numerical solution in the form of a rapidly convergent series by utilizing the provided initial conditions. The main purpose of the paper is to analyze the effect of variation of the fractional order \(\alpha\) on the meeting time of susceptible, infected and recovered computers. Moreover, the local stability analysis of the fractional order computer virus model is also presented using the Routh-Hurwitz stability criterion.Comments on: ``A secure anti-collusion data sharing scheme for dynamic groups in the cloud''https://www.zbmath.org/1483.680232022-05-16T20:40:13.078697Z"Wang, Qiang"https://www.zbmath.org/authors/?q=ai:wang.qiang.1"Zhang, Hao"https://www.zbmath.org/authors/?q=ai:zhang.hao.3|zhang.hao|zhang.hao.2|zhang.hao.4|zhang.hao.1"Sun, Jianfei"https://www.zbmath.org/authors/?q=ai:sun.jianfei"Xiong, Hu"https://www.zbmath.org/authors/?q=ai:xiong.hu"Qin, Zhiguang"https://www.zbmath.org/authors/?q=ai:qin.zhiguangSummary: Very recently,
\textit{Z. Zhu} and \textit{R. Jiang} [``A secure anti-collusion data sharing scheme for dynamic groups in the cloud'', IEEE Trans. Parallel Distrib. Syst. 27, No. 1, 40--50 (2016; \url{doi:10.1109/TPDS.2015.2388446})]
suggested a secure anti-collusion data sharing scheme for dynamic groups in the cloud. In this letter, we show that Zhu-Jiang's scheme is insecure against a forgery attack in the registration phase for existing users. Our proposed attack demonstrates that any outside adversary can masquerade as the group manager to issue invalid or expired secret keys to the existing group users. After that, we present our suggestion for tackling the problem without sacrificing any of the scheme's original features (such as high efficiency and group dynamicity).A knowledge-based analysis of the blockchain protocolhttps://www.zbmath.org/1483.680242022-05-16T20:40:13.078697Z"Halpern, Joseph Y."https://www.zbmath.org/authors/?q=ai:halpern.joseph-y"Pass, Rafael"https://www.zbmath.org/authors/?q=ai:pass.rafaelSummary: At the heart of Bitcoin is a \textit{blockchain} protocol, a protocol for achieving consensus on a public ledger that records bitcoin transactions. To the extent that a blockchain protocol is used for applications such as contract signing and making certain transactions (such as house sales) public, we need to understand what guarantees the protocol gives us in terms of agents' knowledge. Here, we provide a complete characterization of agents' knowledge when running a blockchain protocol, using a variant of common knowledge that takes into account the fact that agents can enter and leave the system, that it is not known which agents are in fact following the protocol (some agents may want to deviate if they can gain by doing so), and that the guarantees provided by blockchain protocols are probabilistic. We then consider some scenarios involving contracts and show that this level of knowledge suffices for some scenarios, but not others.
For the entire collection see [Zbl 1446.68014].Unbeatable consensushttps://www.zbmath.org/1483.680252022-05-16T20:40:13.078697Z"Castañeda, Armando"https://www.zbmath.org/authors/?q=ai:castaneda.armando"Gonczarowski, Yannai A."https://www.zbmath.org/authors/?q=ai:gonczarowski.yannai-a"Moses, Yoram"https://www.zbmath.org/authors/?q=ai:moses.yoramSummary: The \textit{unbeatability} of a consensus protocol, introduced by
\textit{J. Y. Halpern} et al. [SIAM J. Comput. 31, No. 3, 838--865 (2001; Zbl 1017.68007)],
is a stronger notion of optimality than the accepted notion of early stopping protocols. Using a novel knowledge-based analysis, this paper derives the first explicit unbeatable consensus protocols in the literature, for the standard synchronous message-passing model with crash failures. These protocols strictly dominate the best known protocols for uniform and for nonuniform Consensus, in some cases improving on them by a large margin. The analysis provides a new understanding of the logical structure of consensus, and of the distinction between uniform and nonuniform Consensus. All protocols presented in this paper have very concise descriptions, and are shown to be efficiently implementable.TuringMobile: a Turing machine of oblivious mobile robots with limited visibility and its applicationshttps://www.zbmath.org/1483.680262022-05-16T20:40:13.078697Z"Di Luna, Giuseppe A."https://www.zbmath.org/authors/?q=ai:di-luna.giuseppe-antonio"Flocchini, Paola"https://www.zbmath.org/authors/?q=ai:flocchini.paola"Santoro, Nicola"https://www.zbmath.org/authors/?q=ai:santoro.nicola"Viglietta, Giovanni"https://www.zbmath.org/authors/?q=ai:viglietta.giovanniSummary: In this paper we investigate the computational power of a set of mobile robots with limited visibility. At each iteration, a robot takes a snapshot of its surroundings, uses the snapshot to compute a destination point, and moves toward its destination. Robots are punctiform and memoryless; they operate in \(\mathbb{R}^m\), have local reference systems independent of each other, and are activated asynchronously by an adversarial scheduler. Moreover, robots are non-rigid, in that they may be stopped by the scheduler at each move before reaching their destination (but are guaranteed to travel at least a fixed unknown distance before being stopped). 
We show that despite these strong limitations, it is possible to arrange \(3m+3k\) of these weak entities in \(\mathbb{R}^m\) to simulate the behavior of a stronger robot that is rigid (i.e., it always reaches its destination) and is endowed with \(k\) registers of persistent memory, each of which can store a real number. We call this arrangement a \textit{TuringMobile}. In its simplest form, a TuringMobile consisting of only three robots can travel in the plane and store and update a single real number. We also prove that this task is impossible with fewer than three robots. Among the applications of the TuringMobile, we focus on Near-Gathering (all robots have to gather in a small-enough disk) and Pattern Formation (of which Gathering is a special case) with limited visibility. Interestingly, our investigation implies that both problems are solvable in Euclidean spaces of any dimension, even if the visibility graph of the robots is initially disconnected, provided that a small number of these robots are arranged to form a TuringMobile. In the special case of the plane, a basic TuringMobile of only three robots is sufficient.The consensus number of a cryptocurrencyhttps://www.zbmath.org/1483.680272022-05-16T20:40:13.078697Z"Guerraoui, Rachid"https://www.zbmath.org/authors/?q=ai:guerraoui.rachid"Kuznetsov, Petr"https://www.zbmath.org/authors/?q=ai:kuznetsov.petr"Monti, Matteo"https://www.zbmath.org/authors/?q=ai:monti.matteo"Pavlovic, Matej"https://www.zbmath.org/authors/?q=ai:pavlovic.matej"Seredinschi, Dragos-Adrian"https://www.zbmath.org/authors/?q=ai:seredinschi.dragos-adrianSummary: Many blockchain-based algorithms, such as Bitcoin, implement a decentralized asset transfer system, often referred to as a cryptocurrency. As stated in the original paper by Nakamoto, at the heart of these systems lies the problem of preventing double-spending; this is usually solved by achieving consensus on the order of transfers among the participants. 
In this paper, we treat the asset transfer problem as a concurrent object and determine its consensus number, showing that consensus is, in fact, not necessary to prevent double-spending. We first consider the problem as defined by Nakamoto, where only a single process -- the account owner -- can withdraw from each account. Safety and liveness need to be ensured for correct account owners, whereas misbehaving account owners might be unable to perform transfers. We show that the consensus number of an asset transfer object is 1. We then consider a more general \(k\)-shared asset transfer object where up to \(k\) processes can atomically withdraw from the same account, and show that this object has consensus number \(k\). We establish our results in the context of shared memory with benign faults, allowing us to properly understand the level of difficulty of the asset transfer problem. We also translate these results to the message-passing setting with Byzantine players, a model that is more relevant in practice. In this model, we describe an asynchronous Byzantine fault-tolerant asset transfer implementation that is both simpler and more efficient than state-of-the-art consensus-based solutions. Our results are applicable to both the permissioned (private) and permissionless (public) setting, as normally their differentiation is hidden by the abstractions on top of which our algorithms are based.Correction to: ``The consensus number of a cryptocurrency''https://www.zbmath.org/1483.680282022-05-16T20:40:13.078697Z"Guerraoui, Rachid"https://www.zbmath.org/authors/?q=ai:guerraoui.rachid"Kuznetsov, Petr"https://www.zbmath.org/authors/?q=ai:kuznetsov.petr"Monti, Matteo"https://www.zbmath.org/authors/?q=ai:monti.matteo"Pavlovic, Matej"https://www.zbmath.org/authors/?q=ai:pavlovic.matej"Seredinschi, Dragos-Adrian"https://www.zbmath.org/authors/?q=ai:seredinschi.dragos-adrianFrom the text: In the original publication of the article [the authors, ibid. 35, No. 
1, 1--15 (2022; Zbl 1483.68027)], the title of the article was incorrectly published as ``The consensus number of a cryptocurrency (extended version)''. The corrected version is ``The consensus number of a cryptocurrency''. This erratum corrects the same.On atomic registers and randomized consensus in m\&m systemshttps://www.zbmath.org/1483.680292022-05-16T20:40:13.078697Z"Hadzilacos, Vassos"https://www.zbmath.org/authors/?q=ai:hadzilacos.vassos"Hu, Xing"https://www.zbmath.org/authors/?q=ai:hu.xing"Toueg, Sam"https://www.zbmath.org/authors/?q=ai:toueg.samSummary: Motivated by recent distributed systems technology, Aguilera et al. introduced a hybrid model of distributed computing, called the \textit{message-and-memory model} or \textit{m\&m model} for short. In this model, processes can communicate by message passing and also by accessing some shared memory (e.g., through some RDMA connections). We first consider the basic problem of implementing an atomic single-writer multi-reader (SWMR) register shared by \textit{all} the processes in m\&m systems. Specifically, we give an algorithm that implements such a register in m\&m systems and show that it is optimal in the number of process crashes that it tolerates. This generalizes the well-known ABD implementation of an atomic SWMR register in a pure message-passing system. We then combine our register implementation for m\&m systems with a randomized consensus algorithm of Aspnes and Herlihy, and obtain a randomized consensus algorithm for m\&m systems that is also optimal in the number of process crashes that it can tolerate. 
Finally, we determine the minimum number of RDMA connections that is sufficient to implement a SWMR register, or solve randomized consensus, in an m\&m system with \(t\) process crashes, for any given \(t\).Consensus in anonymous asynchronous systems with crash-recovery and omission failureshttps://www.zbmath.org/1483.680302022-05-16T20:40:13.078697Z"Jiménez, Ernesto"https://www.zbmath.org/authors/?q=ai:jimenez.ernesto"López-Presa, José Luis"https://www.zbmath.org/authors/?q=ai:lopez-presa.jose-luis"Patiño-Martínez, Marta"https://www.zbmath.org/authors/?q=ai:patino-martinez.martaSummary: In anonymous distributed systems, processes are indistinguishable because they have no identity and execute the same algorithm. Currently, anonymous systems are receiving a lot of attention mainly because they preserve privacy, which is an important property when we want to avoid impersonation attacks. On the other hand, Consensus is a fundamental problem in distributed computing. It is well-known that Consensus cannot be deterministically solved in pure asynchronous anonymous systems if processes can crash (the so-called crash-stop failure model). This impossibility holds even if message losses never occur in transmission. Failure detectors are an elegant and powerful abstraction for achieving deterministic Consensus in asynchronous distributed systems. A failure detector is a distributed object that gives the processes information about crashed processes. Failure detectors have attracted so much attention in the crash-stop failure model because they provide a totally independent abstraction. \( \varOmega\) is the weakest failure detector to solve Consensus in classic asynchronous systems when a majority of processes never crash, and \(A\varOmega '\) is its implementable version for anonymous systems. 
As far as we know, few works in the literature tackle Consensus in anonymous asynchronous systems where crashed processes can recover (the so-called crash-recovery failure model) while also assuming errors in transmission operations (the so-called omission failure model). Extending the failure models in the system allows us to design more realistic systems and solve more practical security problems (e.g., fair exchange and secure multiparty computation). We present, in this paper, an algorithm to solve Consensus using \(A\varOmega'\) in anonymous asynchronous systems under the crash-recovery and omission failure models. Another important contribution of this paper is a communication-efficient and latency-efficient implementation of \(A\varOmega'\) for these new failure models.Distributed computation and reconfiguration in actively dynamic networkshttps://www.zbmath.org/1483.680312022-05-16T20:40:13.078697Z"Michail, Othon"https://www.zbmath.org/authors/?q=ai:michail.othon"Skretas, George"https://www.zbmath.org/authors/?q=ai:skretas.george"Spirakis, Paul G."https://www.zbmath.org/authors/?q=ai:spirakis.paul-gSummary: We study here systems of distributed entities that can actively modify their communication network. This gives rise to distributed algorithms that, apart from communication, can also exploit network reconfiguration to carry out a given task. Also, the distributed task itself may now require a global reconfiguration from a given initial network \(G_s\) to a target network \(G_f\) from a desirable family of networks. To formally capture costs associated with creating and maintaining connections, we define three edge-complexity measures: the \textit{total edge activations}, the \textit{maximum activated edges per round}, and the \textit{maximum activated degree of a node}. 
We give \(\operatorname{(poly)log}(n)\) time algorithms for the task of transforming any \(G_s\) into a \(G_f\) of diameter \(\operatorname{(poly)log}(n)\), while minimizing the edge-complexity. Our main lower bound shows that \(\varOmega(n)\) total edge activations and \(\varOmega(n/\log n)\) activations per round must be paid by any algorithm (even centralized) that achieves an optimum of \(\varTheta(\log n)\) rounds. We give three distributed algorithms for our general task. The first runs in \(O(\log n)\) time, with at most \(2n\) active edges per round, a total of \(O(n\log n)\) edge activations, a maximum degree \(n-1\), and a target network of diameter 2. The second achieves bounded degree by paying an additional logarithmic factor in time and in total edge activations. It gives a target network of diameter \(O(\log n)\) and uses \(O(n)\) active edges per round. Our third algorithm shows that if we slightly increase the maximum degree to \(\operatorname{polylog}(n)\) then we can achieve \(o(\log^2 n)\) running time.Equivalence classes and conditional hardness in massively parallel computationshttps://www.zbmath.org/1483.680322022-05-16T20:40:13.078697Z"Nanongkai, Danupon"https://www.zbmath.org/authors/?q=ai:nanongkai.danupon"Scquizzato, Michele"https://www.zbmath.org/authors/?q=ai:scquizzato.micheleSummary: The \textit{Massively Parallel Computation} (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasingly more attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the \textit{one cycle versus two cycles} problem. 
This is unlike the traditional arguments based on conjectures about complexity classes (e.g., \(\mathsf{P}\ne \mathsf{NP}\)), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a whole bunch of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic amount of rounds in the MPC model, denoted by \(\mathsf{MPC}(o(\log N))\), and the standard space complexity classes \(\mathsf{L}\) and \(\mathsf{NL}\), and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model. Specifically, our main results are as follows.
\begin{itemize}
\item[--] Lower bounds conditioned on the one cycle versus two cycles conjecture can be instead argued under the \(\mathsf{L}\nsubseteq \mathsf{MPC}(o(\log N))\) conjecture: these two assumptions are equivalent, and refuting either of them would lead to \(o(\log N)\)-round MPC algorithms for a large number of challenging problems, including list ranking, minimum cut, and planarity testing. In fact, we show that these problems and many others require asymptotically the same number of rounds as the seemingly much easier problem of distinguishing between a graph being one cycle or two cycles.
\item[--] Many lower bounds previously argued under the one cycle versus two cycles conjecture can be argued under an even more robust (thus harder to refute) conjecture, namely \(\mathsf{NL}\nsubseteq \mathsf{MPC}(o(\log N))\). Refuting this conjecture would lead to \(o(\log N)\)-round MPC algorithms for an even larger set of problems, including all-pairs shortest paths, betweenness centrality, and all aforementioned ones. Lower bounds under this conjecture hold for problems such as perfect matching and network flow.
\end{itemize}Synthesizing optimal bias in randomized self-stabilizationhttps://www.zbmath.org/1483.680332022-05-16T20:40:13.078697Z"Volk, Matthias"https://www.zbmath.org/authors/?q=ai:volk.matthias"Bonakdarpour, Borzoo"https://www.zbmath.org/authors/?q=ai:bonakdarpour.borzoo"Katoen, Joost-Pieter"https://www.zbmath.org/authors/?q=ai:katoen.joost-pieter"Aflaki, Saba"https://www.zbmath.org/authors/?q=ai:aflaki.sabaSummary: Randomization is a key concept in distributed computing to tackle impossibility results. This also holds for \textit{self-stabilization} in anonymous networks where coin flips are often used to break symmetry. Although the use of randomization in self-stabilizing algorithms is rather common, it is unclear what the optimal coin bias is so as to minimize the expected convergence time. This paper proposes a technique to automatically synthesize this optimal coin bias. Our algorithm is based on a parameter synthesis approach from the field of probabilistic model checking. It over- and under-approximates a given parameter region and iteratively refines the regions with minimal convergence time up to the desired accuracy. We describe the technique in detail and present a simple parallelization that gives an almost linear speed-up. We show the applicability of our technique by determining the optimal bias for Herman's well-known self-stabilizing token ring algorithm. Our synthesis shows that for small rings a fair coin is optimal, whereas for larger rings a biased coin is optimal, with the bias growing with the ring size. We also analyze a variant of Herman's algorithm that coincides with the original algorithm but deviates for biased coins. 
Finally, we show how using \textit{speed reducers} in Herman's protocol improves the expected convergence time.\textit{Renaissance}: a self-stabilizing distributed SDN control plane using in-band communicationshttps://www.zbmath.org/1483.680342022-05-16T20:40:13.078697Z"Canini, Marco"https://www.zbmath.org/authors/?q=ai:canini.marco"Salem, Iosif"https://www.zbmath.org/authors/?q=ai:salem.iosif"Schiff, Liron"https://www.zbmath.org/authors/?q=ai:schiff.liron.1"Schiller, Elad M."https://www.zbmath.org/authors/?q=ai:schiller.elad-michael"Schmid, Stefan"https://www.zbmath.org/authors/?q=ai:schmid.stefanSummary: By introducing programmability, automated verification, and innovative debugging tools, Software-Defined Networks (SDNs) are poised to meet the increasingly stringent dependability requirements of today's communication networks. However, the design of fault-tolerant SDNs remains an open challenge. This paper considers the design of dependable SDNs through the lens of self-stabilization -- a very strong notion of fault-tolerance. In particular, we develop algorithms for an in-band and distributed control plane for SDNs, called Renaissance, which tolerate a wide range of failures. Our self-stabilizing algorithms ensure that after the occurrence of arbitrary failures, (i) every non-faulty SDN controller can reach any switch (or another controller) within a bounded communication delay (in the presence of a bounded number of failures) and (ii) every switch is managed by a controller.
We evaluate Renaissance through a rigorous worst-case analysis as well as a prototype implementation (based on OVS, Floodlight, and Mininet).Improved approaches for density-based outlier detection in wireless sensor networkshttps://www.zbmath.org/1483.680352022-05-16T20:40:13.078697Z"Abid, Aymen"https://www.zbmath.org/authors/?q=ai:abid.aymen"Khediri, Salim El"https://www.zbmath.org/authors/?q=ai:el-khediri.salim"Kachouri, Abdennaceur"https://www.zbmath.org/authors/?q=ai:kachouri.abdennaceurSummary: Density-based algorithms are important data clustering techniques used to find arbitrarily shaped clusters and outliers. Recently, outlier detectors based on density-based clustering have been applied to supervise data streams, including wireless sensor networks (WSNs). In this article, we compare two density-based methods, DBSCAN and OPTICS, using a proposed configuration and a specific classifier to identify outlier and normal clusters. For simulation, in MATLAB, we use real WSN data from the Intel Berkeley lab, into which we introduce white Gaussian noise at different signal-to-noise ratios per data vector. We evaluate the two algorithms under different input parameters using several performance metrics, such as detection rate and false alarm rate. Results indicate that the DBSCAN scheme is more accurate and comprehensive compared with existing approaches for WSNs.
At the same time, OPTICS remains an interesting solution for the hierarchical study of datasets with identification of anomalies.Dynamic scheduling in distributed transactional memoryhttps://www.zbmath.org/1483.680362022-05-16T20:40:13.078697Z"Busch, Costas"https://www.zbmath.org/authors/?q=ai:busch.costas"Herlihy, Maurice"https://www.zbmath.org/authors/?q=ai:herlihy.maurice-p"Popovic, Miroslav"https://www.zbmath.org/authors/?q=ai:popovic.miroslav"Sharma, Gokarna"https://www.zbmath.org/authors/?q=ai:sharma.gokarnaSummary: We investigate scheduling algorithms for distributed transactional memory systems where transactions residing at nodes of a communication graph operate on shared, mobile objects. A transaction requests the objects it needs, executes once those objects have been assembled, and then sends the objects to other waiting transactions. We study scheduling algorithms with provable performance guarantees. Previously, only the \textit{offline batch scheduling} setting was considered in the literature, where transactions are known a priori. Minimizing execution time, even for offline batch scheduling, is known to be NP-hard for arbitrary communication graphs. In this paper, we analyze, for the first time, scheduling algorithms in the \textit{online dynamic scheduling} setting, where transactions are not known a priori and may arrive online over time. We provide efficient and near-optimal execution time schedules for dynamic scheduling in many specialized network architectures. The core of our technique is a method to convert offline schedules into online ones. We first describe a centralized scheduler, which we then adapt to a purely distributed scheduler.
To our knowledge, these are the first attempts to obtain provably efficient online execution schedules for distributed transactional memory.Probabilistic model checking of AODVhttps://www.zbmath.org/1483.680372022-05-16T20:40:13.078697Z"Kamali, Mojgan"https://www.zbmath.org/authors/?q=ai:kamali.mojgan"Katoen, Joost-Pieter"https://www.zbmath.org/authors/?q=ai:katoen.joost-pieterSummary: This paper presents the formal modelling and verification of the Ad-hoc On-demand Distance Vector (AODV) routing protocol. Our study focuses on the quantitative aspects of AODV, in particular the influence of uncertainty (such as packet loss rates, collisions) on the probability to establish short routes. We present a compositional model of AODV's functionality using probabilistic timed automata. The strength of this model is that it combines hard real-time constraints with randomised protocol behaviour and can deal with non-determinism (due to e.g., queue behaviours at network nodes). An automated analysis by probabilistic model checking provides useful insights on the sensitivity of AODV's ability to establish shortest/longest routes and deliver data packets via such routes.
For the entire collection see [Zbl 1475.68022].Scheduling with interjob communication on parallel processorshttps://www.zbmath.org/1483.680382022-05-16T20:40:13.078697Z"König, Jürgen"https://www.zbmath.org/authors/?q=ai:konig.jurgen"Mäcker, Alexander"https://www.zbmath.org/authors/?q=ai:macker.alexander"Meyer auf der Heide, Friedhelm"https://www.zbmath.org/authors/?q=ai:meyer-auf-der-heide.friedhelm"Riechers, Sören"https://www.zbmath.org/authors/?q=ai:riechers.sorenSummary: Consider a scheduling problem in which a set of jobs with interjob communication, canonically represented by a weighted tree, needs to be scheduled on \(m\) parallel processors interconnected by a shared communication channel. In each time step, we may allow any processed job to use a certain capacity of the channel in order to satisfy (parts of) its communication demands to adjacent jobs processed in parallel. The goal is to find a schedule with minimum length in which communication demands of all jobs are satisfied.
We show that this problem is NP-hard in the strong sense even if the number of processors and the maximum degree of the underlying tree are constant. Consequently, we design and analyze simple approximation algorithms with asymptotic approximation ratio \(2 - \frac{1}{2(m-1)}\) in the case of paths and a ratio of \(\frac{5}{2}\) in the case of arbitrary trees.
For the entire collection see [Zbl 1377.68004].Cost-efficient scheduling on machines from the cloudhttps://www.zbmath.org/1483.680392022-05-16T20:40:13.078697Z"Mäcker, Alexander"https://www.zbmath.org/authors/?q=ai:macker.alexander"Malatyali, Manuel"https://www.zbmath.org/authors/?q=ai:malatyali.manuel"der Heide, Friedhelm Meyer auf"https://www.zbmath.org/authors/?q=ai:meyer-auf-der-heide.friedhelm"Riechers, Sören"https://www.zbmath.org/authors/?q=ai:riechers.sorenSummary: We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have machine-type dependent sizes and have individual deadlines. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost.
Since we observe the slack of jobs to have a fundamental influence on the competitiveness, we study the model when instances are parameterized by their (minimum) slack. An instance is said to have a slack of \(\beta\) if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least \(\beta\). While for \(\beta < s\) no finite competitiveness is possible, our main result is an \(O(\frac{c}{\varepsilon}+\frac{1}{\varepsilon^3})\)-competitive online algorithm for \(\beta = (1+\varepsilon)s\) with \(\frac{1}{s} \leq \varepsilon \leq 1\), where \(s\) and \(c\) denote the largest setup time and the cost ratio of the machine-types, respectively. It is complemented by a lower bound of \(\varOmega(\frac{c}{\varepsilon})\).
For the entire collection see [Zbl 1377.68004].A Matlab toolkit for the analysis of two-level processor sharing queueshttps://www.zbmath.org/1483.680402022-05-16T20:40:13.078697Z"Marin, Andrea"https://www.zbmath.org/authors/?q=ai:marin.andrea"Rossi, Sabina"https://www.zbmath.org/authors/?q=ai:rossi.sabina"Zen, Carlo"https://www.zbmath.org/authors/?q=ai:zen.carloSummary: This paper presents a Matlab toolkit for the numerical analysis of the two-level processor sharing queue (2LPS). The job sizes are expressed in terms of acyclic phase-type distributions, which can approximate any distribution arbitrarily well, while arrivals occur according to a homogeneous Poisson process. The toolkit provides a simple yet efficient way to find the optimal parametrization of the 2LPS queueing disciplines given the job size distributions and the intensity of the workload. In practice, the tool can be used to configure the 2LPS scheduler for TCP flows. The time complexity of the solution depends on the cube of the number of phases of the distribution describing the flow sizes.
For the entire collection see [Zbl 1475.68022].M/M/1 vacation queue with multiple thresholds: a fluid analysishttps://www.zbmath.org/1483.680412022-05-16T20:40:13.078697Z"Yazici, Mehmet Akif"https://www.zbmath.org/authors/?q=ai:yazici.mehmet-akif"Phung-Duc, Tuan"https://www.zbmath.org/authors/?q=ai:phung-duc.tuanSummary: We propose an analytical method for an M/M/1 vacation queue with workload-dependent service rates. We obtain the distribution of the workload in the system, and consider a power-saving and performance trade-off problem. Numerical experiments reveal that a square-root service rate function has lower cost than linear and quadratic service rate functions under certain scenarios.
For the entire collection see [Zbl 1475.68022].A game theoretic framework for software diversity for network securityhttps://www.zbmath.org/1483.680422022-05-16T20:40:13.078697Z"Anwar, Ahmed H."https://www.zbmath.org/authors/?q=ai:anwar.ahmed-h"Leslie, Nandi O."https://www.zbmath.org/authors/?q=ai:leslie.nandi-o"Kamhoua, Charles"https://www.zbmath.org/authors/?q=ai:kamhoua.charles-a"Kiekintveld, Christopher"https://www.zbmath.org/authors/?q=ai:kiekintveld.christopher-dSummary: Diversity plays a significant role in network security, and we propose a formal model to investigate and optimize the advantages of software diversity in network security. However, diversity is also costly, and network administrators encounter a tradeoff between network security and the cost to deploy and maintain a well-diversified network. We study this tradeoff in a two-player nonzero-sum game-theoretic model of software diversity. We find the Nash equilibrium of the game to give an optimal security strategy for the defender, and implement an algorithm for optimizing software diversity via embedding a graph-coloring approach based on the Nash equilibrium. We show that the opponent (i.e., adversary) spends more effort to compromise an optimally diversified network. We also analyze the complexity of the proposed algorithm and propose a complexity reduction approach to avoid exponential growth in runtime. We present numerical results that validate the effectiveness of the proposed software diversity approach.
For the entire collection see [Zbl 1475.68029].Equal to the task?https://www.zbmath.org/1483.680432022-05-16T20:40:13.078697Z"Heather, James"https://www.zbmath.org/authors/?q=ai:heather.james"Schneider, Steve"https://www.zbmath.org/authors/?q=ai:schneider.steve-aSummary: Many methods of analysing security protocols have been proposed, but most such methods rely on analysing a protocol running on only a finite network. Some, however -- notably, data independence, the strand spaces model, and the rank functions model -- can be used to prove correctness of a protocol running on an unbounded network.
\textit{A. W. Roscoe} and \textit{P. J. Broadfoot} in [``Proving security protocols with model checkers by data independence techniques'', J. Comput. Secur. 7, No. 2--3, 147--190 (1999; \url{doi:10.3233/JCS-1999-72-303})]
show how data independence techniques may be used to verify a security protocol running on an unbounded network. They also consider a weakness inherent in the RSA algorithm, discovered by
\textit{M. Franklin} and \textit{M. Reiter} [``A linear protocol failure for RSA with exponent three'', presented at the Rump Session of Crypto'95, Santa Barbara, CA (1995)],
and show that their data independence approach cannot deal with an intruder endowed with the ability to exploit this weakness.
In this paper, we show that neither the use of honest ideals in the strand spaces model nor the use of rank functions in the CSP model can be easily adapted to cover such an intruder. In each case, the inequality tests required to model the new intruder cause problems when attempting to extend the analysis of a finite network to cover an unbounded network. The results suggest that more work is needed on adapting the intruder model to allow for cryptographic attacks.
For the entire collection see [Zbl 1014.68936].Decoy allocation games on graphs with temporal logic objectiveshttps://www.zbmath.org/1483.680442022-05-16T20:40:13.078697Z"Kulkarni, Abhishek N."https://www.zbmath.org/authors/?q=ai:kulkarni.abhishek-n"Fu, Jie"https://www.zbmath.org/authors/?q=ai:fu.jie"Luo, Huan"https://www.zbmath.org/authors/?q=ai:luo.huan"Kamhoua, Charles A."https://www.zbmath.org/authors/?q=ai:kamhoua.charles-a"Leslie, Nandi O."https://www.zbmath.org/authors/?q=ai:leslie.nandi-oSummary: We study a class of games, in which the adversary (attacker) is to satisfy a complex mission specified in linear temporal logic, and the defender is to prevent the adversary from achieving its goal. A deceptive defender can allocate decoys, in addition to defense actions, to create disinformation for the attacker. Thus, we focus on the problem of jointly synthesizing a decoy placement strategy and a deceptive defense strategy that maximally exploits the attacker's incomplete information about the decoy locations. We introduce a model of hypergames on graphs with temporal logic objectives to capture such adversarial interactions with asymmetric information. Using the hypergame model, we analyze the effectiveness of a given decoy placement, quantified by the set of deceptive winning states where the defender can prevent the attacker from satisfying the attack objective given its incomplete information about decoy locations. Then, we investigate how to place decoys to maximize the defender's deceptive winning region. Considering the large search space for all possible decoy allocation strategies, we incorporate the idea of compositional synthesis from formal methods and show that the objective function in this class of decoy allocation problems is monotone and non-decreasing. We derive sufficient conditions under which the objective function for the decoy allocation problem is submodular or supermodular, respectively.
We show that a sub-optimal allocation can be efficiently computed by iteratively composing the solutions of hypergames with a subset of decoys and the solution of a hypergame given a single decoy. We use a running example to illustrate the proposed method.
For the entire collection see [Zbl 1475.68029].A message authentication code based on the composition of universal hash familieshttps://www.zbmath.org/1483.680452022-05-16T20:40:13.078697Z"Li, Xue Yuan"https://www.zbmath.org/authors/?q=ai:li.xue-yuan"Wang, Xin Mei"https://www.zbmath.org/authors/?q=ai:wang.xinmei(no abstract)Harnessing the power of deception in attack graph-based security gameshttps://www.zbmath.org/1483.680462022-05-16T20:40:13.078697Z"Milani, Stephanie"https://www.zbmath.org/authors/?q=ai:milani.stephanie"Shen, Weiran"https://www.zbmath.org/authors/?q=ai:shen.weiran"Chan, Kevin S."https://www.zbmath.org/authors/?q=ai:chan.kevin-sean"Venkatesan, Sridhar"https://www.zbmath.org/authors/?q=ai:venkatesan.sridhar"Leslie, Nandi O."https://www.zbmath.org/authors/?q=ai:leslie.nandi-o"Kamhoua, Charles"https://www.zbmath.org/authors/?q=ai:kamhoua.charles-a"Fang, Fei"https://www.zbmath.org/authors/?q=ai:fang.feiSummary: We study the use of deception in attack graph-based Stackelberg security games. In our setting, in addition to allocating defensive resources to protect important targets from attackers, the defender can strategically manipulate the attack graph through three main types of deceptive actions. We show that finding the optimal deception and defense strategy is at least NP-hard. We provide two techniques for efficiently solving this problem: a mixed-integer linear program for layered directed acyclic graphs (DAGs) and neural architecture search for general DAGs. We empirically demonstrate that using deception on attack graphs gives the defender a significant advantage, and the algorithms we develop scale gracefully to medium-sized problems.
For the entire collection see [Zbl 1475.68029].MASAGE: model-agnostic sequential and adaptive game estimationhttps://www.zbmath.org/1483.680472022-05-16T20:40:13.078697Z"Pan, Yunian"https://www.zbmath.org/authors/?q=ai:pan.yunian"Peng, Guanze"https://www.zbmath.org/authors/?q=ai:peng.guanze"Chen, Juntao"https://www.zbmath.org/authors/?q=ai:chen.juntao"Zhu, Quanyan"https://www.zbmath.org/authors/?q=ai:zhu.quanyanSummary: Zero-sum games have been used to model cybersecurity scenarios between an attacker and a defender. However, unknown and uncertain environments have made it difficult to rely on a prescribed zero-sum game to capture the interactions between the players. In this work, we aim to estimate and recover an unknown matrix game that encodes the uncertainties of nature and opponent based on the knowledge of historical games and the current observations of game outcomes. The proposed approach effectively transfers the past experiences that are encoded as expert games to estimate and inform future game plays. We formulate the game knowledge transfer and estimation problem as a sequential least-square problem. We characterize the structural properties of the problem and show that the non-convex problem has well-behaved gradient and Hessian under mild assumptions. We propose gradient-based methods to enable dynamic and adaptive estimation of the unknown game. A case study is used to corroborate the results and illustrate the behavior of the proposed algorithm.
For the entire collection see [Zbl 1475.68029].tPAKE: typo-tolerant password-authenticated key exchangehttps://www.zbmath.org/1483.680482022-05-16T20:40:13.078697Z"Pongmorrakot, Thitikorn"https://www.zbmath.org/authors/?q=ai:pongmorrakot.thitikorn"Chatterjee, Rahul"https://www.zbmath.org/authors/?q=ai:chatterjee.rahulSummary: Password-authenticated key exchange (PAKE) enables a user to authenticate to a server by proving the knowledge of the password without actually revealing their password to the server. PAKE protects user passwords from being revealed to an adversary who compromises the server (or a disgruntled employee). Existing PAKE protocols, however, do not allow even a small typographical mistake in the submitted password, such as accidentally adding a character at the beginning or at the end of the password. Logins are rejected for such password submissions; the user has to retype their password and reengage in the PAKE protocol with the server. Prior works have shown that users often make typographical mistakes while typing their passwords. Allowing users to log in with small typographical mistakes would improve the usability of passwords and help users log in faster. Towards this, we introduce tPAKE, a typo-tolerant PAKE that allows users to authenticate (or exchange high-entropy keys) using a password while tolerating small typographical mistakes. tPAKE allows edit-distance-based errors, but only those that are frequently made by users. This benefits security, while still improving usability. We discuss the security considerations and challenges in designing tPAKE. We implement tPAKE and show that it is computationally feasible to use in place of traditional PAKEs while providing improved usability. We also provide an extension to tPAKE, called adaptive-tPAKE, that will enable the server to allow a user to log in with their frequent mistakes (without ever learning those mistakes).
For the entire collection see [Zbl 1475.68018].Exploiting bounded rationality in risk-based cyber camouflage gameshttps://www.zbmath.org/1483.680492022-05-16T20:40:13.078697Z"Thakoor, Omkar"https://www.zbmath.org/authors/?q=ai:thakoor.omkar"Jabbari, Shahin"https://www.zbmath.org/authors/?q=ai:jabbari.shahin"Aggarwal, Palvi"https://www.zbmath.org/authors/?q=ai:aggarwal.palvi"Gonzalez, Cleotilde"https://www.zbmath.org/authors/?q=ai:gonzalez.cleotilde"Tambe, Milind"https://www.zbmath.org/authors/?q=ai:tambe.milind"Vayanos, Phebe"https://www.zbmath.org/authors/?q=ai:vayanos.phebeSummary: Recent works have increasingly shown that \textit{Cyber deception} can effectively impede the reconnaissance efforts of intelligent cyber attackers. Recently proposed models to optimize a deceptive defense based on camouflaging network and system attributes have shown effective numerical results on simulated data. However, these models possess a fundamental drawback due to the assumption that an attempted attack is always successful -- as a direct consequence of the deceptive strategies being deployed, the attacker runs a significant risk that the attack fails. Further, this risk or uncertainty in the rewards magnifies the boundedly rational behavior in humans, which previous models do not handle. To that end, we present Risk-based Cyber Camouflage Games -- a general-sum game model that captures the uncertainty in the attack's success. In the case of rational attackers, we show that optimal defender strategy computation is NP-hard even in the zero-sum case. We provide an MILP formulation for the general problem with constraints on cost and feasibility, along with a pseudo-polynomial time algorithm for the special \textit{unconstrained} setting. Second, for risk-averse attackers, we present a solution based on prospect-theoretic modeling along with a robust variant that minimizes regret.
Third, we propose a solution that does not rely on an attacker behavior model or past data, and is effective for the broad setting of \textit{strictly competitive games}, where previous solutions against bounded rationality prove ineffective. Finally, we provide numerical results showing that our solutions effectively lower the defender's loss.
For the entire collection see [Zbl 1475.68029].SPEC: an equivalence checker for security protocolshttps://www.zbmath.org/1483.680502022-05-16T20:40:13.078697Z"Tiu, Alwen"https://www.zbmath.org/authors/?q=ai:tiu.alwen-fernanto"Nguyen, Nam"https://www.zbmath.org/authors/?q=ai:nguyen.nam-hoai|nguyen.nam-tuan|nguyen.nam-phuong|nguyen.nam-hai|nguyen.nam-trung|nguyen.nam-ky|nguyen.nam-anh|nguyen.nam-v"Horne, Ross"https://www.zbmath.org/authors/?q=ai:horne.rossSummary: SPEC is an automated equivalence checker for security protocols specified in the spi-calculus, an extension of the pi-calculus with cryptographic primitives. The notion of equivalence considered is a variant of bisimulation, called open bisimulation, that identifies processes indistinguishable when executed in any context. SPEC produces compact and independently checkable bisimulations that are useful for automating the process of producing proof-certificates for security protocols. This paper gives an overview of SPEC and discusses techniques to reduce the size of bisimulations, utilising up-to techniques developed for the spi-calculus. SPEC is implemented in the Bedwyr logic programming language that we demonstrate can be adapted to tackle further protocol analysis problems not limited to bisimulation checking.
For the entire collection see [Zbl 1347.68009].Partially observable stochastic games for cyber deception against network epidemichttps://www.zbmath.org/1483.680512022-05-16T20:40:13.078697Z"Tsemogne, Olivier"https://www.zbmath.org/authors/?q=ai:tsemogne.olivier"Hayel, Yezekael"https://www.zbmath.org/authors/?q=ai:hayel.yezekael"Kamhoua, Charles"https://www.zbmath.org/authors/?q=ai:kamhoua.charles-a"Deugoue, Gabriel"https://www.zbmath.org/authors/?q=ai:deugoue.gabrielSummary: A Decentralized Denial of Service is an attack done by an agent capable to control the spread of a malware. This is a combination of epidemiological and conflictual aspects between several decision makers. There exists in the literature papers that study (non oriented) epidemics and papers that study network attacks regardless the epidemiological aspect. We put together the two aspects and provide a new game theoretical model which is part of the family of partially observable stochastic games (POSG) but with particular features. We prove the consistency of heuristic search value iteration (HSVI) based algorithms. Our framework is applied to optimally design a cyber deception technique based on honeypots in order to control an epidemic cyber-attack of a network by a strategic attacker. Some basic simulations are proposed to illustrate the framework described in this work-in-progress paper.
For the entire collection see [Zbl 1475.68029].A debugger-cooperative higher-order contract system in Pythonhttps://www.zbmath.org/1483.680522022-05-16T20:40:13.078697Z"Arai, Ryoya"https://www.zbmath.org/authors/?q=ai:arai.ryoya"Sato, Shigeyuki"https://www.zbmath.org/authors/?q=ai:sato.shigeyuki"Iwasaki, Hideya"https://www.zbmath.org/authors/?q=ai:iwasaki.hideyaSummary: Contract programming is one of the most promising ways of enhancing the reliability of Python, which is increasingly desired. Higher-order contract systems that support fully specifying the behaviors of iterators and functions are desirable for Python but have not been presented yet. Moreover, even with them, debugging with contracts in Python would still be burdensome because of delayed contract checking. To resolve this problem, we present \textsc{pyblame}, a higher-order contract system in Python, and ccdb, a source-level debugger equipped with features dedicated to debugging with delayed contract checking. \textsc{pyblame} and ccdb are designed on the basis of the standard of Python and are thus friendly to many Python programmers. We have experimentally confirmed the advantage and the efficacy of \textsc{pyblame} and ccdb through the web framework Bottle.
For the entire collection see [Zbl 1347.68009].Towards a UTP semantics for Modelicahttps://www.zbmath.org/1483.680532022-05-16T20:40:13.078697Z"Foster, Simon"https://www.zbmath.org/authors/?q=ai:foster.simon"Thiele, Bernhard"https://www.zbmath.org/authors/?q=ai:thiele.bernhard"Cavalcanti, Ana"https://www.zbmath.org/authors/?q=ai:cavalcanti.ana"Woodcock, Jim"https://www.zbmath.org/authors/?q=ai:woodcock.james-c-pSummary: We describe our work on a UTP semantics for the dynamic systems modelling language Modelica. This is a language for modelling a system's continuous behaviour using a combination of differential-algebraic equations and an event-handling system. We develop a novel UTP theory of hybrid relations, inspired by Hybrid CSP and Duration Calculus, that is purely relational and provides uniform handling of continuous and discrete variables. This theory is mechanised in our Isabelle implementation of the UTP, Isabelle/UTP, with which we verify some algebraic properties. Finally, we show how a subset of Modelica models can be given semantics using our theory. When combined with the wealth of existing UTP theories for discrete system modelling, our work enables a sound approach to heterogeneous semantics for Cyber-Physical systems by leveraging the theory linking facilities of the UTP.
For the entire collection see [Zbl 1355.68010].Probabilistic programming language and its incremental evaluationhttps://www.zbmath.org/1483.680542022-05-16T20:40:13.078697Z"Kiselyov, Oleg"https://www.zbmath.org/authors/?q=ai:kiselyov.olegSummary: This system description paper introduces the probabilistic programming language Hakaru10, for expressing, and performing inference on (general) graphical models. The language supports discrete and continuous distributions, mixture distributions and conditioning. Hakaru10 is a DSL embedded in Haskell and supports Monte-Carlo Markov chain (MCMC) inference.
Hakaru10 is designed to address two main challenges of probabilistic programming: performance and correctness. It implements the incremental Metropolis-Hastings method, avoiding all redundant computations. In the presence of conditional branches, efficiently maintaining dependencies and correctly computing the acceptance ratio are non-trivial problems, solved in Hakaru10. The implementation is unique in being explicitly designed to satisfy the common equational laws of probabilistic programs. Hakaru10 is typed; specifically, its type system statically prevents meaningless conditioning, enforcing that the values to condition upon must indeed come from outside the model.
For the entire collection see [Zbl 1347.68009].\textsc{Eliom}: a core ML language for tierless web programminghttps://www.zbmath.org/1483.680552022-05-16T20:40:13.078697Z"Radanne, Gabriel"https://www.zbmath.org/authors/?q=ai:radanne.gabriel"Vouillon, Jérôme"https://www.zbmath.org/authors/?q=ai:vouillon.jerome"Balat, Vincent"https://www.zbmath.org/authors/?q=ai:balat.vincentSummary: \textsc{Eliom} is a dialect of \textsc{OCaml} for web programming in which server and client pieces of code can be mixed in the same file using syntactic annotations. This makes it possible to build a whole application as a single distributed program, in which reusable widgets with both server and client behaviors can be defined in a composable way. Our language also enables simple and type-safe communication. \textsc{Eliom} matches the specificities of the Web by allowing the programmer to interleave client and server code while maintaining efficient one-way server-to-client communication. The \textsc{Eliom} language is both sufficiently small to be implemented on top of an existing language and sufficiently powerful to express many idioms of web programming.
In this paper, we present a formalization of the core language of \textsc{Eliom}. We provide a type system, the execution model and a compilation scheme.
For the entire collection see [Zbl 1347.68009].Static trace-based deadlock analysis for synchronous Mini-Gohttps://www.zbmath.org/1483.680562022-05-16T20:40:13.078697Z"Stadtmüller, Kai"https://www.zbmath.org/authors/?q=ai:stadtmuller.kai"Sulzmann, Martin"https://www.zbmath.org/authors/?q=ai:sulzmann.martin"Thiemann, Peter"https://www.zbmath.org/authors/?q=ai:thiemann.peter-jSummary: We consider the problem of static deadlock detection for programs in the Go programming language which make use of synchronous channel communications. In our analysis, regular expressions extended with a fork operator capture the communication behavior of a program. Starting from a simple criterion that characterizes traces of deadlock-free programs, we develop automata-based methods to check for deadlock-freedom. The approach is implemented and evaluated with a series of examples.
For the entire collection see [Zbl 1347.68009].Certifying standard and stratified Datalog inference engines in SSReflecthttps://www.zbmath.org/1483.680572022-05-16T20:40:13.078697Z"Benzaken, Véronique"https://www.zbmath.org/authors/?q=ai:benzaken.veronique"Contejean, Évelyne"https://www.zbmath.org/authors/?q=ai:contejean.evelyne"Dumbrava, Stefania"https://www.zbmath.org/authors/?q=ai:dumbrava.stefaniaSummary: We propose an SSReflect library for logic programming in the Datalog setting. As part of this work, we give a first mechanization of standard Datalog and of its extension with stratified negation. The library contains a formalization of the model theoretical and fixpoint semantics of the languages, implemented through bottom-up and, respectively, through stratified evaluation procedures. We provide corresponding soundness, termination, completeness and model minimality proofs. To this end, we rely on the Coq proof assistant and SSReflect. In this context, we also construct a preliminary framework for dealing with stratified programs. We consider this to be a necessary first step towards the certification of security-aware data-centric applications.
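[Reviewer's illustration] The bottom-up evaluation procedure mechanized in the library computes the minimal Herbrand model as the least fixpoint of the immediate-consequence operator. As a rough sketch only (plain Python, not the Coq/SSReflect development; the `parent`/`ancestor` program is a standard textbook example, not taken from the paper):

```python
# Naive bottom-up Datalog evaluation: iterate the immediate-consequence
# operator T_P until a fixpoint is reached. A rule is (head, [body atoms]);
# an atom is (predicate, args-tuple); strings starting with an uppercase
# letter are variables, everything else is a constant.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(atom, fact, subst):
    # Try to extend `subst` so that `atom` instantiates to `fact`.
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    s = dict(subst)
    for a, f in zip(args, fargs):
        if is_var(a):
            if s.get(a, f) != f:
                return None
            s[a] = f
        elif a != f:
            return None
    return s

def consequences(rules, facts):
    new = set()
    for head, body in rules:
        # Join the body atoms against the current fact set.
        substs = [{}]
        for atom in body:
            substs = [s2 for s in substs for f in facts
                      if (s2 := match(atom, f, s)) is not None]
        for s in substs:
            hp, ha = head
            new.add((hp, tuple(s.get(a, a) for a in ha)))
    return new

def bottom_up(rules, facts):
    facts = set(facts)
    while True:
        step = consequences(rules, facts)
        if step <= facts:      # fixpoint: minimal Herbrand model reached
            return facts
        facts |= step
```

Termination is immediate for Datalog (no function symbols, finite Herbrand base); the certified development proves this together with soundness, completeness and model minimality.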
For the entire collection see [Zbl 1369.68009].Iteratively-supported formulas and strongly supported models for Kleene answer set programs (extended abstract)https://www.zbmath.org/1483.680582022-05-16T20:40:13.078697Z"Doherty, Patrick"https://www.zbmath.org/authors/?q=ai:doherty.patrick"Kvarnström, Jonas"https://www.zbmath.org/authors/?q=ai:kvarnstrom.jonas"Szałas, Andrzej"https://www.zbmath.org/authors/?q=ai:szalas.andrzejSummary: In this extended abstract, we discuss the use of iteratively-supported formulas (ISFs) as a basis for computing strongly-supported models for Kleene answer set programs (\(\mathrm{ASP}^{K}\)). \(\mathrm{ASP}^{K}\) programs have a syntax identical to classical ASP programs. The semantics of \(\mathrm{ASP}^{K}\) programs is based on the use of Kleene three-valued logic and strongly-supported models. For normal \(\mathrm{ASP}^{K}\) programs, their strongly supported models are identical to classical answer sets using stable model semantics. For disjunctive \(\mathrm{ASP}^{K}\) programs, the semantics weakens the minimality assumption resulting in a classical interpretation for disjunction. We use ISFs to characterize strongly-supported models and show that they are polynomially bounded.
For the entire collection see [Zbl 1350.68015].Forgetting in ASP: the forgotten propertieshttps://www.zbmath.org/1483.680592022-05-16T20:40:13.078697Z"Gonçalves, Ricardo"https://www.zbmath.org/authors/?q=ai:goncalves.ricardo"Knorr, Matthias"https://www.zbmath.org/authors/?q=ai:knorr.matthias"Leite, João"https://www.zbmath.org/authors/?q=ai:leite.joaoSummary: Many approaches for forgetting in answer set programming (ASP) have been proposed in recent years, in the form of specific operators, or classes of operators, following different principles and obeying different properties. A recently published comprehensive overview of existing operators and properties provided a uniform picture of the landscape, including many novel (even surprising) results on relations between properties and operators. Yet, this overview largely missed an additional set of properties for forgetting, proposed by Wong, and this paper aims to close this gap. It turns out that, while some of these properties are closely related to the properties previously studied, four of them are distinct, providing novel results and insights, further strengthening established relations between existing operators.
For the entire collection see [Zbl 1350.68015].Automated modular termination proofs for real Prolog programshttps://www.zbmath.org/1483.680602022-05-16T20:40:13.078697Z"Müller, Martin"https://www.zbmath.org/authors/?q=ai:muller.martin-eric|muller.martin-c|muller.martin-michael"Glaß, Thomas"https://www.zbmath.org/authors/?q=ai:glass.thomas-a"Stroetmann, Karl"https://www.zbmath.org/authors/?q=ai:stroetmann.karlSummary: We present a methodology for checking the termination of Prolog programs that can be automated and is scalable. Furthermore, the proposed method can be used to locate errors. It has been successfully implemented as part of a tool that uses static analysis based on formal methods in order to validate Prolog programs. This tool is aimed at supporting the design and maintenance of Prolog programs.
Our approach is based on a natural extension of the notion of \textit{acceptable} programs developed in
[\textit{K. R. Apt} and \textit{D. Pedreschi}, ``Studies in pure Prolog: termination'', in: J. W. Lloyd (ed.), Computational logic. Berlin: Springer. 150--176 (1990; \url{doi:10.1007/978-3-642-76274-1_9});
in: Advances in logic programming theory. Lectures delivered at the fourth international school on foundations of logic programming, 1992. Oxford: Clarendon Press. 183--229 (1994; Zbl 0849.68011)].
The main idea is to assign a measure of complexity to predicate calls. Then the termination of a program is shown by proving this measure to be decreasing on recursive calls. While this measure is a natural number in
[Apt and Pedreschi, loc. cit. (1990); loc. cit. (1994)],
we extend this idea by using \textit{tuples} of natural numbers as a measure of complexity. These tuples are compared lexicographically. The use of this kind of measure enables us to refine the notion of \textit{acceptable} programs to the notion of \textit{loop free} programs. This notion can be used to modularize the termination proof of Prolog programs to a greater extent than previously possible.
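[Reviewer's illustration] The tuple-measure idea can be seen on the Ackermann function, the textbook case where no single natural number decreases on every recursive call but the pair of arguments does, in the lexicographic order. A minimal Python sketch (ours, not the authors' tool) that checks the decrease at run time:

```python
def ackermann(m, n):
    # Termination measure: the pair (m, n), compared lexicographically
    # (Python's built-in tuple comparison). Each assertion checks that
    # the measure strictly decreases on the corresponding recursive call.
    if m == 0:
        return n + 1
    if n == 0:
        assert (m - 1, 1) < (m, n)      # first component decreases
        return ackermann(m - 1, 1)
    assert (m, n - 1) < (m, n)          # second component decreases
    inner = ackermann(m, n - 1)
    assert (m - 1, inner) < (m, n)      # first component decreases
    return ackermann(m - 1, inner)
```

Note that the second component may grow arbitrarily (the inner call's result feeds the outer call), which is exactly why a single natural-number measure fails while the lexicographic pair succeeds.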
For the entire collection see [Zbl 0855.68002].Determining inference semantics for disjunctive logic programshttps://www.zbmath.org/1483.680612022-05-16T20:40:13.078697Z"Shen, Yi-Dong"https://www.zbmath.org/authors/?q=ai:shen.yidong"Eiter, Thomas"https://www.zbmath.org/authors/?q=ai:eiter.thomasSummary: In a seminal paper,
\textit{M. Gelfond} and \textit{V. Lifschitz} [New Gener. Comput. 9, No. 3--4, 365--385 (1991; Zbl 0735.68012)]
introduced simple disjunctive logic programs, where in rule heads the disjunction operator ``|'' is used to express incomplete information, and defined the answer set semantics (called \textit{GL-semantics} for short) based on a program transformation (called \textit{GL-reduct}) and the minimal model requirement. Our observations reveal that the requirement of the GL-semantics, i.e., an answer set should be a minimal model of rules of the GL-reduct, may sometimes be too strong a condition and exclude some answer sets that would be reasonably acceptable. To address this, we present an alternative, more permissive answer set semantics, called the \textit{determining inference (DI) semantics}. Specifically, we introduce a head selection function to formalize the operator | and define answer sets as follows: (i) Given an interpretation \(I\) and a selection function \textit{sel}, we transform a disjunctive program \(\Pi\) into a normal program \(\Pi_{sel}^I\), called a \textit{disjunctive program reduct}; (ii) given a base answer set semantics \(\mathcal{X}\) for normal programs, we define \(I\) to be a \textit{candidate answer set} of \(\Pi\) w.r.t. \( \mathcal{X}\) if \(I\) is an answer set of \(\Pi_{sel}^I\) under \(\mathcal{X}\); and (iii) we define \(I\) to be an answer set of \(\Pi\) w.r.t. \( \mathcal{X}\) if \(I\) is a minimal candidate answer set. The DI-semantics is general and applicable to extend any answer set semantics \(\mathcal{X}\) for normal programs to disjunctive programs. By replacing \(\mathcal{X}\) with the GL \({}_{nlp}\)-semantics defined by
\textit{M. Gelfond} and \textit{V. Lifschitz} [``The stable model semantics for logic programming'', in: Proceedings of international logic programming conference and symposium. Cambridge, MA: MIT Press. 1070--1080 (1988)],
we induce a DI-semantics for simple disjunctive programs, and by replacing \(\mathcal{X}\) with the well-justified semantics defined by
\textit{Y.-D. Shen} et al. [Artif. Intell. 213, 1--41 (2014; Zbl 1391.68016)],
we further induce a DI-semantics for general disjunctive programs. We also establish a novel characterization of the GL-semantics in terms of a disjunctive program reduct, which reveals the essential difference of the DI-semantics from the GL-semantics and leads us to giving a satisfactory solution to the open problem presented by
\textit{P. Hitzler} and \textit{A. K. Seda} [``Multivalued mappings, fixed-point theorems and disjunctive databases'', in: Proceedings of the 3rd Irish workshop on formal methods, IWFM'99. Galway: Electronic Workshops in Computing (eWiC). 1--18 (1999; \url{doi:10.14236/ewic/IWFM1999.8})]
about characterizing split normal derivatives of a simple disjunctive program \(\Pi\) such that answer sets of the normal derivatives are answer sets of \(\Pi\) under the GL-semantics. Finally we give computational complexity results; in particular we show that in the propositional case deciding whether a simple disjunctive program \(\Pi\) has some DI-answer set is NP-complete. This is in contrast to the GL-semantics and equivalent formulations such as the FLP-semantics
[\textit{W. Faber} et al., Artif. Intell. 175, No. 1, 278--298 (2011; Zbl 1216.68263)],
where deciding whether \(\Pi\) has some answer set is \(\Sigma_2^p\)-complete, while brave and cautious reasoning are \(\Sigma_2^p\)- and \(\Pi_2^p\)-complete, respectively, for both GL- and DI-answer sets. For general disjunctive programs with compound formulas as building blocks, the complexity of brave and cautious reasoning increases under DI-semantics by one level of the polynomial hierarchy, which thus offers higher problem solving capacity.Measuring inconsistency in answer set programshttps://www.zbmath.org/1483.680622022-05-16T20:40:13.078697Z"Ulbricht, Markus"https://www.zbmath.org/authors/?q=ai:ulbricht.markus"Thimm, Matthias"https://www.zbmath.org/authors/?q=ai:thimm.matthias"Brewka, Gerhard"https://www.zbmath.org/authors/?q=ai:brewka.gerhardSummary: We address the issue of quantitatively assessing the severity of inconsistencies in logic programs under the answer set semantics. While measuring inconsistency in classical logics has been investigated for some time now, taking the non-monotonicity of answer set semantics into account brings new challenges that have to be addressed by reasonable accounts of inconsistency measures. We investigate the behavior of inconsistency in logic programs by revisiting existing rationality postulates for inconsistency measurement and developing novel ones taking non-monotonicity into account. Further, we develop new measures for this setting and investigate their properties.
For the entire collection see [Zbl 1350.68015].Open call-by-valuehttps://www.zbmath.org/1483.680632022-05-16T20:40:13.078697Z"Accattoli, Beniamino"https://www.zbmath.org/authors/?q=ai:accattoli.beniamino"Guerrieri, Giulio"https://www.zbmath.org/authors/?q=ai:guerrieri.giulioSummary: The elegant theory of the call-by-value lambda-calculus relies on weak evaluation and closed terms, that are natural hypotheses in the study of programming languages. To model proof assistants, however, strong evaluation and open terms are required, and it is well known that the operational semantics of call-by-value becomes problematic in this case. Here we study the intermediate setting -- that we call open call-by-value -- of weak evaluation with open terms, on top of which Grégoire and Leroy designed the abstract machine of Coq. Various calculi for open call-by-value already exist, each one with its pros and cons. This paper presents a detailed comparative study of the operational semantics of four of them, coming from different areas such as the study of abstract machines, denotational semantics, linear logic proof nets, and sequent calculus. We show that these calculi are all equivalent from a termination point of view, justifying the slogan open call-by-value.
For the entire collection see [Zbl 1347.68009].A sound and complete bisimulation for contextual equivalence in \(\lambda\)-calculus with call/cchttps://www.zbmath.org/1483.680642022-05-16T20:40:13.078697Z"Yachi, Taichi"https://www.zbmath.org/authors/?q=ai:yachi.taichi"Sumii, Eijiro"https://www.zbmath.org/authors/?q=ai:sumii.eijiroSummary: We develop a sound and complete proof method of contextual equivalence in \(\lambda\)-calculus with the abortive control operator call/cc (as opposed to delimited control operators like \(\mathtt {shift}\) and \(\mathtt {reset}\)), and prove the non-trivial equivalence between \(\lambda f.f()\) and \(\lambda f.f();f()\) for example, both for the first time to our knowledge. Although our method is based on environmental bisimulations [\textit{E. Sumii} and \textit{B. C. Pierce}, in: Proceedings of the 31st ACM SIGPLAN-SIGACT symposium on principles of programming languages, POPL '04. New York, NY: Association for Computing Machinery (ACM). 161--172 (2004; Zbl 1325.68050)], it makes an essential and general change to their metatheory, which is not only necessary for handling call/cc but is also applicable in other languages with no control operator.
For the entire collection see [Zbl 1347.68009].Unified syntax with iso-typeshttps://www.zbmath.org/1483.680652022-05-16T20:40:13.078697Z"Yang, Yanpeng"https://www.zbmath.org/authors/?q=ai:yang.yanpeng"Bi, Xuan"https://www.zbmath.org/authors/?q=ai:bi.xuan"Oliveira, Bruno C.d. S."https://www.zbmath.org/authors/?q=ai:oliveira.bruno-c-d-sSummary: Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts, and added expressiveness. However, integrating unrestricted general recursion in calculi with unified syntax is challenging when some level of type-level computation is present, as decidable type-checking is easily lost.
This paper argues that the advantages of unified syntax also apply to traditional functional languages, and there is no need to give up decidable type-checking. We present a dependently typed calculus that uses unified syntax, supports general recursion and has decidable type-checking. The key to retaining decidable type-checking is a generalization of iso-recursive types called iso-types. Iso-types replace the conversion rule typically used in dependently typed calculi, and make every computation explicit via cast operators. We study two variants of the calculus that differ in the reduction strategy employed by the cast operators, and give different trade-offs in terms of simplicity and expressiveness.
For the entire collection see [Zbl 1347.68009].Observation-based concurrent program logic for relaxed memory consistency modelshttps://www.zbmath.org/1483.680662022-05-16T20:40:13.078697Z"Abe, Tatsuya"https://www.zbmath.org/authors/?q=ai:abe.tatsuya"Maeda, Toshiyuki"https://www.zbmath.org/authors/?q=ai:maeda.toshiyukiSummary: Concurrent program logics are frameworks for constructing proofs, which ensure that concurrent programs work correctly. However, most conventional concurrent program logics do not consider the complexities of modern memory structures, and the proofs in the logics do not ensure that programs will work correctly. To the best of our knowledge, independent reads independent writes (IRIW), which is known to have non-intuitive behavior under relaxed memory consistency models, has not been fully studied in the context of concurrent program logics. One reason is the gap between theoretical memory consistency models that program logics can handle and the realistic memory consistency models adopted by actual computer architectures. In this paper, we propose observation variables and invariants that fill this gap, releasing us from the need to construct operational semantics and logic for each specific memory consistency model. We describe general operational semantics for relaxed memory consistency models, define a concurrent program logic that is sound with respect to the operational semantics, show that observation invariants can be formalized as axioms of the logic, and verify IRIW under an observation invariant. We also obtain a novel insight through constructing the logic. To define a logic that is sound with respect to the operational semantics, we dismiss shared variables in programs from assertion languages, and adopt variables observed by threads. This suggests that the so-called bird's-eye view of the whole computing system disturbs the soundness of the logic.
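[Reviewer's illustration] The IRIW litmus test mentioned in the abstract is small enough to explore exhaustively. The following Python sketch (an illustration of the litmus test itself, not of the authors' logic) enumerates all sequentially consistent interleavings and confirms that the relaxed-memory outcome r1=1, r2=0, r3=1, r4=0 never arises under sequential consistency:

```python
# IRIW (independent reads of independent writes):
#   T1: x := 1            T2: y := 1
#   T3: r1 := x; r2 := y  T4: r3 := y; r4 := x
# Under sequential consistency the outcome (1, 0, 1, 0) is impossible,
# since T3 and T4 would have to observe the two writes in opposite
# orders; relaxed models such as POWER admit it.
from itertools import permutations

THREADS = {
    1: [('w', 'x')],
    2: [('w', 'y')],
    3: [('r', 'x', 'r1'), ('r', 'y', 'r2')],
    4: [('r', 'y', 'r3'), ('r', 'x', 'r4')],
}

def sc_outcomes():
    results = set()
    ops = [(t, i) for t, prog in THREADS.items() for i in range(len(prog))]
    for order in permutations(ops):
        # Keep only interleavings respecting each thread's program order.
        if any(order.index((t, 0)) > order.index((t, 1)) for t in (3, 4)):
            continue
        mem, regs = {'x': 0, 'y': 0}, {}
        for t, i in order:
            op = THREADS[t][i]
            if op[0] == 'w':
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        results.add((regs['r1'], regs['r2'], regs['r3'], regs['r4']))
    return results
```

Observation invariants, in the paper's approach, let one state and verify exactly such per-thread views of the writes without fixing a specific hardware model.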
For the entire collection see [Zbl 1347.68009].AkkaProf: a profiler for Akka actors in parallel and distributed applicationshttps://www.zbmath.org/1483.680672022-05-16T20:40:13.078697Z"Rosà, Andrea"https://www.zbmath.org/authors/?q=ai:rosa.andrea"Chen, Lydia Y."https://www.zbmath.org/authors/?q=ai:chen.lydia-y"Binder, Walter"https://www.zbmath.org/authors/?q=ai:binder.walterSummary: Nowadays, numerous programming languages and frameworks offer concurrency based on the actor model. Among the actor libraries for the Java Virtual Machine, Akka is the most used one, as it is employed in various parallel and distributed applications and frameworks. Unfortunately, despite the spread of actor libraries, Akka in particular, existing profiling tools are not very effective at identifying performance drawbacks in applications using actors. In this tool paper, we aim at filling this gap by presenting AkkaProf, a profiling tool for Akka actors. AkkaProf provides detailed metrics on actor utilization and on the communication between them, two fundamental aspects of actor-based applications that are overlooked by other profilers. AkkaProf aids performance analysis in several applications and frameworks in both parallel and distributed environments.
For the entire collection see [Zbl 1347.68009].Non-blocking concurrent imperative programming with session typeshttps://www.zbmath.org/1483.680682022-05-16T20:40:13.078697Z"Silva, Miguel"https://www.zbmath.org/authors/?q=ai:silva.miguel-t|silva.miguel-l|silva.miguel-p-t"Florido, Mário"https://www.zbmath.org/authors/?q=ai:florido.mario"Pfenning, Frank"https://www.zbmath.org/authors/?q=ai:pfenning.frankSummary: Concurrent C0 is an imperative programming language in the C family with session-typed message-passing concurrency. The previously proposed semantics implements asynchronous (non-blocking) output; we extend it here with non-blocking input. A key idea is to postpone message reception as much as possible by interpreting receive commands as a request for a message. We implemented our ideas as a translation from a blocking intermediate language to a non-blocking language. Finally, we evaluated our techniques with several benchmark programs and show the results obtained. While the abstract measure of span always decreases (or remains unchanged), only a few of the examples reap a practical benefit.
For the entire collection see [Zbl 1436.68021].Design and implementation of concurrent C0https://www.zbmath.org/1483.680692022-05-16T20:40:13.078697Z"Willsey, Max"https://www.zbmath.org/authors/?q=ai:willsey.max"Prabhu, Rokhini"https://www.zbmath.org/authors/?q=ai:prabhu.rokhini"Pfenning, Frank"https://www.zbmath.org/authors/?q=ai:pfenning.frankSummary: We describe Concurrent C0, a type-safe C-like language with contracts and session-typed communication over channels. Concurrent C0 supports an operation called forwarding which allows channels to be combined in a well-defined way. The language's type system enables elegant expression of session types and message-passing concurrent programs. We provide a Go-based implementation with language based optimizations that outperforms traditional message passing techniques.
For the entire collection see [Zbl 1436.68021].Generalized dominators for structured programshttps://www.zbmath.org/1483.680702022-05-16T20:40:13.078697Z"Alstrup, Stephen"https://www.zbmath.org/authors/?q=ai:alstrup.stephen"Lauridsen, Peter W."https://www.zbmath.org/authors/?q=ai:lauridsen.peter-w"Thorup, Mikkel"https://www.zbmath.org/authors/?q=ai:thorup.mikkelSummary: Recently it has been discovered that control flow graphs of structured programs have bounded treewidth. In this paper we show that this knowledge can be used to design fast algorithms for control flow analysis. We give a linear time algorithm for the problem of finding the immediate multiple-vertex dominator set for all nodes in a control flow graph. The problem was originally proposed by
\textit{R. Gupta} [``Generalized dominators and post-dominators'', in: Proceedings of the 19th ACM SIGPLAN-SIGACT symposium on principles of programming languages, POPL'92. New York, NY: Association for Computing Machinery (ACM). 246--257 (1992; \url{doi:10.1145/143165.143216})].
Without the restriction of bounded treewidth the fastest algorithm runs in \(O(|V|*|E|)\) on a graph with \(|V|\) nodes and \(|E|\) edges and is due to
the first author et al. [Inf. Process. Lett. 59, No. 1, 9--11 (1996; Zbl 1046.68976)].
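[Reviewer's illustration] For background, the classic single-vertex dominator sets that the multiple-vertex notion generalizes can be computed with the standard iterative dataflow scheme. A generic Python sketch (unrelated to the paper's linear-time, treewidth-based algorithm):

```python
# Single-vertex dominator sets of a control flow graph:
#   Dom(entry) = {entry}
#   Dom(n)     = {n} ∪ intersection of Dom(p) over all predecessors p of n
# Iterate until the sets stabilize.
def dominators(succ, entry):
    nodes = set(succ)
    preds = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom
```

A multiple-vertex dominator, by contrast, is a *set* of nodes that jointly covers every path from the entry to a given node; computing immediate such sets is the problem the paper solves in linear time for bounded-treewidth graphs.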
For the entire collection see [Zbl 0855.68002].Shorter circuit obfuscation in challenging security modelshttps://www.zbmath.org/1483.680712022-05-16T20:40:13.078697Z"Brakerski, Zvika"https://www.zbmath.org/authors/?q=ai:brakerski.zvika"Dagmi, Or"https://www.zbmath.org/authors/?q=ai:dagmi.orSummary: The study of program obfuscation is seeing great progress in recent years, which is crucially attributed to the introduction of graded encoding schemes by
\textit{S. Garg} et al. [Lect. Notes Comput. Sci. 7881, 1--17 (2013; Zbl 1300.94055)].
In such schemes, elements of a ring can be encoded such that the content of the encoding is hidden, but restricted algebraic manipulations, followed by zero-testing, can be performed publicly. This primitive currently underlies all known constructions of general-purpose obfuscators. However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and use as conservative security models as possible. This often comes at a cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly a la
[\textit{J. Zimmerman}, Lect. Notes Comput. Sci. 9057, 439--467 (2015; Zbl 1371.68054); \textit{B. Applebaum} and \textit{Z. Brakerski}, Lect. Notes Comput. Sci. 9015, 528--556 (2015; Zbl 1382.94049); J. Cryptology 34, No. 2, Paper No. 14, 41 p. (2021; Zbl 1467.94025)].
Our construction requires a graded encoding scheme with only 3 ``plaintext slots'' (= sub-rings of the underlying ring), which is directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
\begin{itemize}
\item[1.]We prove that our obfuscator is indistinguishability-secure (iO) in the unique representation generic graded encoding model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
\item [2.]Following
[\textit{S. Badrinarayanan} et al., Lect. Notes Comput. Sci. 9666, 764--791 (2016; Zbl 1371.94622)],
we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of non-evasive functions (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual-black-box (VBB) secure in this model.
\end{itemize}
For the entire collection see [Zbl 1344.94004].UTPCalc -- a calculator for UTP predicateshttps://www.zbmath.org/1483.680722022-05-16T20:40:13.078697Z"Butterfield, Andrew"https://www.zbmath.org/authors/?q=ai:butterfield.andrewSummary: We present the development of the UTP-Calculator: a tool, written in Haskell, that supports rapid prototyping of new theories in the Unifying Theories of Programming paradigm, by supporting an easy way to very quickly perform test calculations. The emphasis during the calculator development was keeping it simple but effective, and relying on the user to have the expertise to check its output. It is not intended to supplant existing theorem prover or language transformation technology. The tool is designed for someone who is both very familiar with UTP theory construction, and familiar enough with Haskell to be able to write pattern-matching code. In this paper we describe how this tool can be used to assist in theory development, by describing the key components of the calculator and how various aspects of such a theory might be encoded. We finish with a discussion of our experience in using the tool.
For the entire collection see [Zbl 1355.68010].Termination analysis of nonlinear loops and linear loops in \( \mathbb R^m\)https://www.zbmath.org/1483.680732022-05-16T20:40:13.078697Z"Chen, Jing Min"https://www.zbmath.org/authors/?q=ai:chen.jingmin(no abstract)A two-way path between formal and informal design of embedded systemshttps://www.zbmath.org/1483.680742022-05-16T20:40:13.078697Z"Chen, Mingshuai"https://www.zbmath.org/authors/?q=ai:chen.mingshuai"Ravn, Anders P."https://www.zbmath.org/authors/?q=ai:ravn.anders-p"Wang, Shuling"https://www.zbmath.org/authors/?q=ai:wang.shuling"Yang, Mengfei"https://www.zbmath.org/authors/?q=ai:yang.mengfei"Zhan, Naijun"https://www.zbmath.org/authors/?q=ai:zhan.naijunSummary: It is well known that informal simulation-based design of embedded systems has a low initial cost and delivers early results; yet it cannot guarantee the correctness and reliability of the system to be developed. In contrast, the correctness and reliability of the system can be thoroughly investigated with formal design, but it requires a larger effort, which increases the development cost. Therefore, it is desirable for a designer to move between formal and informal design. This paper describes how to translate Hybrid CSP (HCSP) formal models into Simulink graphical models, so that the models can be simulated and tested using a MATLAB platform, thus avoiding expensive formal verification if the development is at a stage where it is considered unnecessary. Together with our previous work on encoding Simulink/Stateflow diagrams into HCSP, it provides a two-way path in the design of embedded systems, so that the designer can flexibly shift between formal and informal models. The translation from HCSP into Simulink diagrams is implemented as a fully automatic tool, and the correctness of the translation is justified using Unifying Theories of Programming (UTP).
For the entire collection see [Zbl 1355.68010].On logical and extensional characterizations of attributed feature modelshttps://www.zbmath.org/1483.680752022-05-16T20:40:13.078697Z"Damiani, Ferruccio"https://www.zbmath.org/authors/?q=ai:damiani.ferruccio"Lienhardt, Michael"https://www.zbmath.org/authors/?q=ai:lienhardt.michael"Paolini, Luca"https://www.zbmath.org/authors/?q=ai:paolini.lucaSummary: Software-intensive systems can have thousands of interdependent configuration options across different subsystems. Feature models (FMs) allow designers to organize the configuration space by describing configuration options using interdependent features: a feature is a name representing some functionality and each software variant is identified by a set of features. Attributed feature models (AFMs) extend FMs to describe the, possibly constrained, choice of a value from domains such as integers or strings: each attribute is associated to one feature, and when the feature is selected then the attribute brings some additional information relative to the selected features. Different representations of FMs and AFMs have been proposed in the literature. In this paper we focus on the logical representation (which works well in practice) and the extensional representation (which has been recently shown well suited for theoretical investigations). We provide an algebraic and a logical characterization of operations and relations on FMs and AFMs, and we formalize the connection between the two characterizations as monomorphisms from lattices of logical FMs and AFMs to lattices of extensional FMs and AFMs, respectively. This formalization sheds new light on the correspondence between the algebraic and logical characterizations of operations and relations for FMs and AFMs. 
It aims to foster the development of a formal framework for supporting practical exploitation of future theoretical developments on FMs, AFMs and multi software product lines.Formal semantics of a classical-quantum languagehttps://www.zbmath.org/1483.680762022-05-16T20:40:13.078697Z"Deng, Yuxin"https://www.zbmath.org/authors/?q=ai:deng.yuxin"Feng, Yuan"https://www.zbmath.org/authors/?q=ai:feng.yuanSummary: We investigate the formal semantics of a simple imperative language that has both classical and quantum constructs. More specifically, we provide an operational semantics, a denotational semantics and two Hoare-style proof systems: an abstract one and a concrete one. The two proof systems are satisfaction-based, as inspired by the program logics of Barthe et al. for probabilistic programs. The abstract proof system turns out to be sound and relatively complete, while the concrete one is sound only.Probabilistic analysis based on symbolic game semantics and model countinghttps://www.zbmath.org/1483.680772022-05-16T20:40:13.078697Z"Dimovski, Aleksandar S."https://www.zbmath.org/authors/?q=ai:dimovski.aleksandar-sSummary: Probabilistic program analysis aims to quantify the probability that a given program satisfies a required property. It has many potential applications, from program understanding and debugging to computing program reliability, compiler optimizations and quantitative information flow analysis for security. In these situations, it is usually more relevant to quantify the probability of satisfying/violating a given property than to just assess the possibility of such events to occur.
In this work, we introduce an approach for probabilistic analysis of open programs (i.e. programs with undefined identifiers) based on game semantics and model counting. We use a symbolic representation of algorithmic game semantics to collect the symbolic constraints on the input data (context) that lead to the occurrence of the target events (e.g. satisfaction/violation of a given property). The constraints are then analyzed to quantify how likely an input is to satisfy them. We use model counting techniques to count the number of solutions (from a bounded integer domain) that satisfy given constraints. These counts are then used to assign probabilities to program executions and to assess the probability for the target event to occur at the desired level of confidence. Finally, we present the results of applying our approach to several interesting examples and illustrate the benefits they may offer.
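[Reviewer's illustration] The counting step can be sketched with a brute-force model counter over a bounded integer domain (a toy Python sketch; the constraint below is an invented example, and real analyses use symbolic model counters rather than enumeration):

```python
# Probabilistic analysis via model counting: for inputs drawn uniformly
# from a bounded domain, the probability that a constraint holds is
#   (#solutions of the constraint) / (#points in the domain).
from itertools import product
from fractions import Fraction

def probability(constraint, domains):
    # `constraint` is a predicate taking one argument per domain;
    # `domains` is a list of finite ranges, one per input variable.
    total = 1
    for d in domains:
        total *= len(d)
    count = sum(1 for point in product(*domains) if constraint(*point))
    return Fraction(count, total)
```

In the paper's setting the constraints come from the symbolic game-semantic model of the open program, so the count quantifies how likely a *context* is to trigger the target event.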
For the entire collection see [Zbl 1436.68017].Neural software vulnerability analysis using rich intermediate graph representations of programshttps://www.zbmath.org/1483.680782022-05-16T20:40:13.078697Z"Ghaffarian, Seyed Mohammad"https://www.zbmath.org/authors/?q=ai:ghaffarian.seyed-mohammad"Shahriari, Hamid Reza"https://www.zbmath.org/authors/?q=ai:shahriari.hamid-rezaSummary: Security vulnerabilities are among the major concerns of modern software engineering. Successful results of machine learning techniques in various challenging applications have led to an emerging field of research to investigate the effectiveness of machine learning, and more recently, deep learning techniques, for the problem of software vulnerability analysis and discovery. In this paper, we explore the utilization of \textit{Graph Neural Networks} as the latest trend and progress in the field of artificial neural networks. To this end, we propose an original neural vulnerability analysis approach, using customized intermediate graph representations of programs to train graph neural network models. Experimental results on a public suite of vulnerable programs show that the proposed approach is effective at the task of software vulnerability analysis. Additional empirical experiments answer complementary research questions about the proposed approach. In particular, we present experimental results for the challenging task of cross-project vulnerability analysis, with interesting insights on the capabilities of our novel approach. 
Furthermore, a software utility that was developed in the course of this study is also published as a further contribution to the research community.Reversing imperative parallel programshttps://www.zbmath.org/1483.680792022-05-16T20:40:13.078697Z"Hoey, James"https://www.zbmath.org/authors/?q=ai:hoey.james"Ulidowski, Irek"https://www.zbmath.org/authors/?q=ai:ulidowski.irek"Yuen, Shoji"https://www.zbmath.org/authors/?q=ai:yuen.shojiSummary: We propose an approach and a subsequent extension for reversing imperative programs. Firstly, we produce both an augmented version and a corresponding inverted version of the original program. Augmentation saves reversal information into an auxiliary data store, maintaining segregation between this and the program state, while never altering the data store in any other way than that of the original program. Inversion uses this information to revert the final program state to the state as it was before execution. We prove that augmentation and inversion work as intended, and illustrate our approach with several examples. We also suggest a modification to our first approach to support non-communicating parallelism. Execution interleaving introduces a number of challenges, each of which our extended approach considers. We define annotation and redefine inversion to use a sequence of statement identifiers, making the interleaving order deterministic in reverse.
For the entire collection see [Zbl 1433.68033].Termination analysis for offline partial evaluation of a higher order functional languagehttps://www.zbmath.org/1483.680802022-05-16T20:40:13.078697Z"Holst Andersen, Peter"https://www.zbmath.org/authors/?q=ai:holst-andersen.peter"Kehler Holst, Carsten"https://www.zbmath.org/authors/?q=ai:kehler-holst.carstenSummary: One of the remaining problems on the path towards fully automatic partial evaluation is ensuring termination of the specialization phase. In
[the second author, ``Finiteness analysis'', Lect. Notes Comput. Sci. 523, 473--495 (1991; \url{doi:10.1007/3540543961_23})]
we gave a termination analysis which could be applied to partial evaluation of first-order strict languages, using a new result about inductive arguments (loosely: if, whenever something grows, something else gets smaller, then the program will only enter finitely many different states). In this paper we extend this work to cover higher-order functional languages. We take an operational approach to the problem and consider the closure representation of higher-order functions to perform a combined data- and control-dependency analysis. The result of this analysis is then used, as in the first-order case, to decide which arguments need to be dynamic to guarantee termination of partial evaluation of the analysed program. The new methods have been tested on a variety of programs, and will be incorporated in a future release of the Similix partial evaluator for Scheme.
For the entire collection see [Zbl 0855.68002].A new roadmap for linking theories of programminghttps://www.zbmath.org/1483.680812022-05-16T20:40:13.078697Z"Jifeng, He"https://www.zbmath.org/authors/?q=ai:jifeng.heSummary: Formal methods advocate the crucial role played by the algebraic approach in specification and implementation of programs. Traditionally, a top-down approach (with denotational model as its origin) links the algebra of programs with the denotational representation by establishment of the soundness and completeness of the algebra against the given model, while a bottom-up approach (a journey started from operational model) introduces a variety of bisimulations to establish the equivalence relation among programs, and then presents a set of algebraic laws in support of program analysis and verification. This paper proposes a new roadmap for linking theories of programming. Our approach takes an algebra of programs as its foundation, and generates both denotational and operational representations from the algebraic refinement relation.
For the entire collection see [Zbl 1355.68010].Refined environment classifiers. Type- and scope-safe code generation with mutable cellshttps://www.zbmath.org/1483.680822022-05-16T20:40:13.078697Z"Kiselyov, Oleg"https://www.zbmath.org/authors/?q=ai:kiselyov.oleg"Kameyama, Yukiyoshi"https://www.zbmath.org/authors/?q=ai:kameyama.yukiyoshi"Sudo, Yuto"https://www.zbmath.org/authors/?q=ai:sudo.yutoSummary: Generating high-performance code and applying typical optimizations within the bodies of loops and functions involves moving or storing open code for later use, often in a different binding environment. There are ample opportunities for variables being left unbound or accidentally captured. It has been a tough challenge to statically ensure that by construction the generated code is nevertheless well-typed and well-scoped: all free variables in manipulated and stored code fragments shall eventually be bound, by their intended binders.
We present a calculus for code generation with mutable state that, for the first time, achieves type safety and hygiene without ad hoc restrictions. The calculus strongly resembles region-based memory management, but with proofs that are orders of magnitude simpler. It employs a suitably abstract representation of free variables which, like hypotheses in natural deduction, are free from the bureaucracy of syntax imposed by type environments or numbering conventions.
Although the calculus was designed for the sake of formalization and is deliberately bare-bones, it turns out to be easily implementable and not too cumbersome for writing realistic programs.
For the entire collection see [Zbl 1347.68009].Termination of a class of nonlinear loop programshttps://www.zbmath.org/1483.680832022-05-16T20:40:13.078697Z"Li, Jun"https://www.zbmath.org/authors/?q=ai:li.jun.6|li.jun.7|li.jun.11|li.jun.10|li.jun|li.jun.3|li.jun.8|li.jun.2|li.jun.13|li.jun.12|li.jun.1|li.jun.14"Li, Yi"https://www.zbmath.org/authors/?q=ai:li.yi.1|li.yi.5|li.yi.3|li.yi.4|li.yi.2|li.yi"Feng, Yong"https://www.zbmath.org/authors/?q=ai:feng.yong(no abstract)NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coveragehttps://www.zbmath.org/1483.680842022-05-16T20:40:13.078697Z"Li, Qiuying"https://www.zbmath.org/authors/?q=ai:li.qiuying"Pham, Hoang"https://www.zbmath.org/authors/?q=ai:pham.hoangSummary: In this paper, we propose a testing-coverage software reliability model that considers not only the imperfect debugging (ID) but also the uncertainty of operating environments based on a non-homogeneous Poisson process (NHPP). Software is usually tested in a given control environment, but it may be used in different operating environments by different users, which are unknown to the developers. Many NHPP software reliability growth models (SRGMs) have been developed to estimate the software reliability measures, but most of the underlying common assumptions of these models are that the operating environment is the same as the developing environment. But in fact, due to the unpredictability of the uncertainty in the operating environments for the software, environments may considerably influence the reliability and software's performance in an unpredictable way. So when a software system works in a field environment, its reliability is usually different from the theory reliability, and also from all its similar applications in other fields. 
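For background, a classical baseline NHPP SRGM of the kind discussed here is the Goel-Okumoto model, whose mean value function is \(m(t) = a(1 - e^{-bt})\), with \(a\) the expected total number of faults and \(b\) the detection rate. A minimal sketch (with invented parameters; this is not the testing-coverage model proposed in the paper):

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative number of faults detected by time t
    under the Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def expected_remaining_faults(t, a, b):
    """Expected number of faults still latent at time t."""
    return a - goel_okumoto_mean(t, a, b)

# Illustration with invented parameters: a = 100 total faults, rate b = 0.05
m_20 = goel_okumoto_mean(20.0, 100.0, 0.05)  # grows toward a as t increases
```

Models such as the one proposed in this paper refine the fault detection rate (here the constant \(b\)) with testing-coverage and environment factors.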
In this paper, a new model is proposed with the consideration of the fault detection rate based on the testing coverage and examined to cover ID subject to the uncertainty of operating environments. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real software failure data based on seven criteria. An improved normalized criteria distance (NCD) method is also used to rank and select the best model in the context of a set of goodness-of-fit criteria taken all together. All results demonstrate that the new model can give a significantly improved goodness-of-fit and predictive performance. Finally, the optimal software release time based on cost and reliability requirements and its sensitivity analysis are discussed.Termination analysis of a class of initialized linear loopshttps://www.zbmath.org/1483.680852022-05-16T20:40:13.078697Z"Li, Yi"https://www.zbmath.org/authors/?q=ai:li.yi.1|li.yi.5|li.yi.2|li.yi|li.yi.4|li.yi.3"Li, Chuan Can"https://www.zbmath.org/authors/?q=ai:li.chuan-can"Wu, Wen Yuan"https://www.zbmath.org/authors/?q=ai:wu.wenyuan(no abstract)A discrete geometric model of concurrent program executionhttps://www.zbmath.org/1483.680862022-05-16T20:40:13.078697Z"Möller, Bernhard"https://www.zbmath.org/authors/?q=ai:moller.bernhard"Hoare, Tony"https://www.zbmath.org/authors/?q=ai:hoare.c-a-r-tony"Müller, Martin E."https://www.zbmath.org/authors/?q=ai:muller.martin-eric"Struth, Georg"https://www.zbmath.org/authors/?q=ai:struth.georgSummary: A trace of the execution of a concurrent object-oriented program can be displayed in two dimensions as a diagram of a non-metric finite geometry. The actions of a program are represented by points, its objects and threads by vertical lines, its transactions by horizontal lines, its communications and resource sharing by sloping arrows, and its partial traces by rectangular figures.
We prove informally that the geometry satisfies the laws of Concurrent Kleene Algebra (CKA); these describe and justify the interleaved implementation of multithreaded programs on computer systems with a smaller number of concurrent processors. More familiar forms of semantics (e.g., verification-oriented and operational) can be derived from CKA.
Programs are represented as sets of all their possible traces of execution, and non-determinism is introduced as union of these sets. The geometry is extended to multiple levels of abstraction and granularity; a method call at a higher level can be modelled by a specification of the method body, which is implemented at a lower level.
The final section describes how the axioms and definitions of the geometry have been encoded in the interactive proof tool Isabelle, and reports on progress towards automatic checking of the proofs in the paper.
For the entire collection see [Zbl 1355.68010].UTP semantics of reactive processes with continuationshttps://www.zbmath.org/1483.680872022-05-16T20:40:13.078697Z"Ngondi, Gerard Ekembe"https://www.zbmath.org/authors/?q=ai:ngondi.gerard-ekembe"Woodcock, Jim"https://www.zbmath.org/authors/?q=ai:woodcock.james-c-pSummary: Based on the Unifying Theories of Programming (UTP) semantic framework, Hoare and He have defined (a means for constructing) a high-level language with labels and jumps, using the concept of continuations. The language permits placing labels at given points within a program and making jumps to these labels when desired. In their work, Hoare and He have limited themselves to the definition of continuations for sequential programs. This paper is concerned with the extension of that work to reactive programs. We first extend their results to include parallelism and higher order programs. This is achieved by designing a new control variable \(\mathcal {L}\) whose value follows the parallel structure of programs. We then proceed to define reactive (CSP) processes that contain the new control variable \(\mathcal {L}\), resulting in the theory of reactive (process) blocks. The encapsulation operator defined by Hoare and He, which may also be used for hiding the control variable \(\mathcal {L}\), readily provides a (functional) link between the two UTP theories of reactive processes and of reactive blocks. The semantics are denotational.
For the entire collection see [Zbl 1355.68010].Weighted propositional configuration logics: a specification language for architectures with quantitative featureshttps://www.zbmath.org/1483.680882022-05-16T20:40:13.078697Z"Paraponiari, Paulina"https://www.zbmath.org/authors/?q=ai:paraponiari.paulina"Rahonis, George"https://www.zbmath.org/authors/?q=ai:rahonis.georgeSummary: We introduce and investigate a weighted propositional configuration logic over commutative semirings. Our logic is intended to serve as a specification language for software architectures with quantitative features. We prove an efficient construction of full normal forms and decidability of equivalence of formulas in this logic. We illustrate the motivation of this work by describing well-known architectures equipped with quantitative characteristics using formulas in our logic.DOM: specification and client reasoninghttps://www.zbmath.org/1483.680892022-05-16T20:40:13.078697Z"Raad, Azalea"https://www.zbmath.org/authors/?q=ai:raad.azalea"Santos, José Fragoso"https://www.zbmath.org/authors/?q=ai:santos.jose-fragoso"Gardner, Philippa"https://www.zbmath.org/authors/?q=ai:gardner.philippa-anneSummary: We present an axiomatic specification of a key fragment of DOM using structural separation logic. This specification allows us to develop modular reasoning about client programs that call the DOM.
For the entire collection see [Zbl 1347.68009].A stepwise approach to linking theorieshttps://www.zbmath.org/1483.680902022-05-16T20:40:13.078697Z"Ribeiro, Pedro"https://www.zbmath.org/authors/?q=ai:ribeiro.pedro"Cavalcanti, Ana"https://www.zbmath.org/authors/?q=ai:cavalcanti.ana"Woodcock, Jim"https://www.zbmath.org/authors/?q=ai:woodcock.james-c-pSummary: Formal modelling of complex systems requires catering for a variety of aspects. The Unifying Theories of Programming (UTP) distinguishes itself as a semantic framework that promotes unification of results across different modelling paradigms via linking functions. The naive composition of theories, however, may yield unexpected or undesirable semantic models. Here, we propose a stepwise approach to linking theories where we deal separately with the definition of the relation between the variables in the different theories and the identification of healthiness conditions. We explore this approach by deriving healthiness conditions for Circus Time via calculation, based on the healthiness conditions of CSP and a small set of principles underlying the timed model.
For the entire collection see [Zbl 1355.68010].Formal verification of parallel prefix sum and stream compaction algorithms in CUDAhttps://www.zbmath.org/1483.680912022-05-16T20:40:13.078697Z"Safari, Mohsen"https://www.zbmath.org/authors/?q=ai:safari.mohsen"Huisman, Marieke"https://www.zbmath.org/authors/?q=ai:huisman.mariekeSummary: GPUs are an important part of any High Performance Computing (HPC) architecture. To make optimal use of the specifics of a GPU architecture, we need programming models that naturally support the parallel execution model of a GPU. CUDA and OpenCL are two widely used examples of such programming models. Furthermore, we also need to redesign algorithms such that they adhere to this parallel programming model, and we need to be able to prove the correctness of these redesigned algorithms.
In this paper we study two examples of such parallelized algorithms, and we discuss how to prove their correctness (data race freedom and (partial) functional correctness) using the VerCors program verifier. First of all, we prove the correctness of two parallel algorithms solving the prefix sum problem. Second, we show how such a prefix sum algorithm is used as a basic block in a stream compaction algorithm, and we prove correctness of this stream compaction algorithm, taking advantage of the earlier correctness proof for the prefix sum algorithm.
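For readers unfamiliar with the two building blocks, a sequential reference sketch can state the intended input/output behaviour (illustration only; the paper verifies parallel CUDA kernels, not this Python version):

```python
def exclusive_prefix_sum(xs):
    """Exclusive scan: out[i] = xs[0] + ... + xs[i-1], with out[0] = 0."""
    out, running = [], 0
    for x in xs:
        out.append(running)
        running += x
    return out

def stream_compaction(xs, keep):
    """Keep xs[i] where keep[i] is truthy. The exclusive prefix sum of the
    0/1 flags gives each surviving element its output index, which is why
    a scan is the key building block in the parallel formulation."""
    flags = [1 if k else 0 for k in keep]
    idx = exclusive_prefix_sum(flags)
    size = idx[-1] + flags[-1] if flags else 0
    out = [None] * size
    for i, x in enumerate(xs):
        if flags[i]:
            out[idx[i]] = x
    return out

# e.g. stream_compaction([5, 0, 7, 0, 9], [1, 0, 1, 0, 1]) yields [5, 7, 9]
```

In the GPU setting the scan itself is computed in parallel (commonly a Blelloch- or Kogge-Stone-style scan), and the per-element writes into `out` are then data-race free because the scanned indices are distinct.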
The proofs as described in this paper are developed over the CUDA implementations of these algorithms. In earlier work, we had already shown correctness of a more high-level version of the algorithm. This paper discusses how we add support to reason about CUDA programs in VerCors, and it then shows how we can redo the verification at the level of the CUDA code. We also discuss some practical challenges that we had to address to prove correctness of the actual CUDA-level verifications.Termination analysis of polynomial programs by finite differencehttps://www.zbmath.org/1483.680922022-05-16T20:40:13.078697Z"Yu, Wei"https://www.zbmath.org/authors/?q=ai:yu.wei"Feng, Yong"https://www.zbmath.org/authors/?q=ai:feng.yong(no abstract)An axiomatic value model for Isabelle/UTPhttps://www.zbmath.org/1483.680932022-05-16T20:40:13.078697Z"Zeyda, Frank"https://www.zbmath.org/authors/?q=ai:zeyda.frank"Foster, Simon"https://www.zbmath.org/authors/?q=ai:foster.simon"Freitas, Leo"https://www.zbmath.org/authors/?q=ai:freitas.leoSummary: The Unifying Theories of Programming (UTP) is a mathematical framework to define, examine and link program semantics for a large variety of computational paradigms. Several mechanisations of the UTP in HOL theorem provers have been developed. All of them, however, succumb to a trade off in how they encode the value model of UTP theories. A deep and unified value model via a universal (data)type incurs restrictions on permissible value types and adds complexity; a value model directly instantiating HOL types for UTP values retains simplicity, but sacrifices expressiveness, since we lose the ability to compositionally reason about alphabets and theories. We here propose an alternative solution that axiomatises the value model and retains the advantages of both approaches. We carefully craft a definitional mechanism in the Isabelle/HOL prover that guarantees soundness.
For the entire collection see [Zbl 1355.68010].Learning a strategy for choosing widening thresholds from a large codebasehttps://www.zbmath.org/1483.680942022-05-16T20:40:13.078697Z"Cha, Sooyoung"https://www.zbmath.org/authors/?q=ai:cha.sooyoung"Jeong, Sehun"https://www.zbmath.org/authors/?q=ai:jeong.sehun"Oh, Hakjoo"https://www.zbmath.org/authors/?q=ai:oh.hakjooSummary: In numerical static analysis, the technique of widening thresholds is essential for improving the analysis precision, but blind use of the technique often significantly slows down the analysis. Ideally, an analysis should apply the technique only when it benefits, by carefully choosing thresholds that contribute to the final precision. However, finding the proper widening thresholds is nontrivial and existing syntactic heuristics often produce suboptimal results. In this paper, we present a method that automatically learns a good strategy for choosing widening thresholds from a given codebase. A notable feature of our method is that a good strategy can be learned by analyzing each program in the codebase only once, which makes it possible to use a large codebase as training data. We evaluated our technique with a static analyzer for full C and 100 open-source benchmarks. The experimental results show that the learned widening strategy is highly cost-effective; it achieves 84\% of the full precision while increasing the baseline analysis cost only by \(1.4{\times}\). Our learning algorithm is able to achieve this performance 26 times faster than the previous Bayesian optimization approach.
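For context, the widening-with-thresholds technique that such a strategy configures can be sketched on intervals as follows (a textbook illustration with an invented threshold set, not the paper's learned strategy):

```python
import math

def widen_with_thresholds(old, new, thresholds):
    """Widen interval `old` by `new`: an unstable bound jumps to the
    nearest enclosing threshold instead of straight to +/- infinity."""
    lo_old, hi_old = old
    lo_new, hi_new = new
    lo = lo_old if lo_new >= lo_old else max(
        (t for t in thresholds if t <= lo_new), default=-math.inf)
    hi = hi_old if hi_new <= hi_old else min(
        (t for t in thresholds if t >= hi_new), default=math.inf)
    return (lo, hi)

# With thresholds {0, 10, 100}: widening [0, 1] by [0, 2] yields [0, 10]
# rather than [0, +inf], often enough to verify a bound such as i < 10.
```

The cost/precision trade-off mentioned above comes from the threshold set: more thresholds mean more widening steps before the bounds stabilise.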
For the entire collection see [Zbl 1347.68009].AUSPICE-R: automatic safety-property proofs for realistic features in machine codehttps://www.zbmath.org/1483.680952022-05-16T20:40:13.078697Z"Tan, Jiaqi"https://www.zbmath.org/authors/?q=ai:tan.jiaqi"Tay, Hui Jun"https://www.zbmath.org/authors/?q=ai:tay.hui-jun"Gandhi, Rajeev"https://www.zbmath.org/authors/?q=ai:gandhi.rajeev"Narasimhan, Priya"https://www.zbmath.org/authors/?q=ai:narasimhan.priyaSummary: Automatically generating proofs of safety properties for software is important as software becomes safety-critical, e.g., in medical devices and automobiles. While current techniques can automatically prove safety properties for machine code, they either: (i) do not support user-mode programs in an operating system, (ii) do not support realistic program features such as system calls, or (iii) have been demonstrated only on programs of limited sizes. We present AUSPICE-R, which automates safety-property proof generation for user-mode ARM machine code containing system calls, and greatly improves the scalability of automated safety-property proof generation. AUSPICE-R uses an axiomatic approach to model system calls, and leverages idioms in compiled code to optimize its proof automation. We demonstrate AUSPICE-R on (i) simple working versions of common text utilities that perform I/O, and (ii) embedded programs for the Raspberry Pi single-board-computer containing hardware I/O. AUSPICE-R automatically proves safety up to \(12\times\) faster, and supports programs \(3\times\) larger, than prior techniques.
For the entire collection see [Zbl 1347.68009].A linear potential function for pairing heapshttps://www.zbmath.org/1483.680962022-05-16T20:40:13.078697Z"Iacono, John"https://www.zbmath.org/authors/?q=ai:iacono.john"Yagnatinsky, Mark"https://www.zbmath.org/authors/?q=ai:yagnatinsky.markSummary: We present the first potential function for pairing heaps with linear range. This implies that the runtime of a short sequence of operations is faster than previously known. It is also simpler than the only other potential function known to give constant amortized time for insertion.
For the entire collection see [Zbl 1377.68004].Amortized efficiency of ranking and unranking left-child sequences in lexicographic orderhttps://www.zbmath.org/1483.680972022-05-16T20:40:13.078697Z"Pai, Kung-Jui"https://www.zbmath.org/authors/?q=ai:pai.kung-jui"Wu, Ro-Yu"https://www.zbmath.org/authors/?q=ai:wu.ro-yu"Chang, Jou-Ming"https://www.zbmath.org/authors/?q=ai:chang.jou-ming"Chang, Shun-Chieh"https://www.zbmath.org/authors/?q=ai:chang.shun-chiehSummary: A new type of sequences called left-child sequences (LC-sequences for short) was recently introduced by
\textit{R.-Y. Wu} et al. [Theor. Comput. Sci. 556, 25--33 (2014; Zbl 1338.68282)]
for representing binary trees. In particular, they pointed out that such sequences have a natural interpretation from the viewpoint of data structures and gave a characterization of them. Based on this characterization, there is an algorithm to generate all LC-sequences of binary trees with \(n\) internal nodes in lexicographic order. In this paper, we extend our study to the ranking and unranking problems. By integrating a measure called ``left distances'' introduced by Mäkinen [8] to represent binary trees, we develop efficient ranking and unranking algorithms for LC-sequences in lexicographic order. With the help of aggregate analysis, we show that both the ranking and unranking algorithms run in amortized \(\mathcal {O}(n)\) time and space.
For the entire collection see [Zbl 1377.68004].Hilbert-style axiomatization for hybrid XPath with datahttps://www.zbmath.org/1483.680982022-05-16T20:40:13.078697Z"Areces, Carlos"https://www.zbmath.org/authors/?q=ai:areces.carlos"Fervari, Raul"https://www.zbmath.org/authors/?q=ai:fervari.raulSummary: In this paper we introduce a sound and complete axiomatization for XPath with data constraints extended with hybrid operators. First, we define \(\mathrm{HXPath}_=({{\uparrow }{\downarrow }})\), an extension of vertical XPath with nominals and the hybrid operator @. Then, we introduce an axiomatic system for \(\mathrm{HXPath}_=({{\uparrow }{\downarrow }})\), and we prove it is complete with respect to the class of abstract data trees, i.e., data trees in which data values are abstracted as equivalence relations. As a corollary, we also obtain completeness with respect to the class of concrete data trees.
For the entire collection see [Zbl 1350.68015].Investigations on path indexing for graph databaseshttps://www.zbmath.org/1483.680992022-05-16T20:40:13.078697Z"Sumrall, Jonathan M."https://www.zbmath.org/authors/?q=ai:sumrall.jonathan-m"Fletcher, George H. L."https://www.zbmath.org/authors/?q=ai:fletcher.george-h-l"Poulovassilis, Alexandra"https://www.zbmath.org/authors/?q=ai:poulovassilis.alexandra"Svensson, Johan"https://www.zbmath.org/authors/?q=ai:svensson.johan"Vejlstrup, Magnus"https://www.zbmath.org/authors/?q=ai:vejlstrup.magnus"Vest, Chris"https://www.zbmath.org/authors/?q=ai:vest.chris"Webber, Jim"https://www.zbmath.org/authors/?q=ai:webber.jimSummary: Graph databases have become an increasingly popular choice for the management of the massive network data sets arising in many contemporary applications. We investigate the effectiveness of path indexing for accelerating query processing in graph database systems, using as an exemplar the widely used open-source Neo4j graph database. We present a novel path index design which supports efficient ordered access to paths in a graph dataset. Our index is fully persistent and designed for external memory storage and retrieval. We also describe a compression scheme that exploits the limited differences between consecutive keys in the index, as well as a workload-driven approach to indexing. We demonstrate empirically the speed-ups achieved by our implementation, showing that the path index yields query run-times from 2x up to 8000x faster than Neo4j. Empirical evaluation also shows that our scheme leads to smaller indexes than using general-purpose LZ4 compression. The complete stand-alone implementation of our index, as well as supporting tooling such as a bulk-loader, are provided as open source for further research and development.
For the entire collection see [Zbl 1364.68008].The privacy blanket of the shuffle modelhttps://www.zbmath.org/1483.681002022-05-16T20:40:13.078697Z"Balle, Borja"https://www.zbmath.org/authors/?q=ai:balle.borja"Bell, James"https://www.zbmath.org/authors/?q=ai:bell.james-j|bell.james-r|bell.james-f|bell.james-h.1"Gascón, Adrià"https://www.zbmath.org/authors/?q=ai:gascon.adria"Nissim, Kobbi"https://www.zbmath.org/authors/?q=ai:nissim.kobbiSummary: This work studies differential privacy in the context of the recently proposed shuffle model. Unlike in the local model, where the server collecting privatized data from users can track back an input to a specific user, in the shuffle model users submit their privatized inputs to a server anonymously. This setup yields a trust model which sits in between the classical curator and local models for differential privacy. The shuffle model is the core idea in the encode, shuffle, analyze (ESA) model introduced by
\textit{A. Bittau} et al. [``Prochlo: strong privacy for analytics in the crowd'', in: Proceedings of the 26th symposium on operating systems principles, SOPS'17. New York, NY: Association for Computing Machinery (ACM). 441--459 (2017; \url{doi:10.1145/3132747.3132769})].
Recent work by
\textit{A. Cheu} et al. [Lect. Notes Comput. Sci. 11476, 375--403 (2019; Zbl 1470.94081)]
analyzes the differential privacy properties of the shuffle model and shows that in some cases shuffled protocols provide strictly better accuracy than local protocols. Additionally,
\textit{Ú. Erlingsson} et al. [in: Proceedings of the 30th annual ACM-SIAM symposium on discrete algorithms, SODA'19. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 2468--2479 (2019; Zbl 1432.68133)]
provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user.
In this context, we make three contributions. First, we provide an optimal single message protocol for summation of real numbers in the shuffle model. Our protocol is very simple and has better accuracy and communication than the protocols for this same problem proposed by Cheu et al. [loc. cit.]. Optimality of this protocol follows from our second contribution, a new lower bound for the accuracy of private protocols for summation of real numbers in the shuffle model. The third contribution is a new amplification bound for analyzing the privacy of protocols in the shuffle model in terms of the privacy provided by the corresponding local randomizer. Our amplification bound generalizes the results by Erlingsson et al. [loc. cit.] to a wider range of parameters, and provides a whole family of methods to analyze privacy amplification in the shuffle model.
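As a concrete example of the kind of local randomizer whose privacy is amplified by shuffling, binary randomized response with its standard debiasing step can be sketched as follows (an illustration of the classical mechanism, not the paper's optimal real-summation protocol):

```python
import math
import random

def randomize_bit(bit, epsilon, rng=random):
    """epsilon-LDP randomized response: report the true bit with
    probability p = e^eps / (1 + e^eps), the flipped bit otherwise."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def shuffle_reports(reports, rng=random):
    """The shuffler only permutes messages, severing the link to senders."""
    out = list(reports)
    rng.shuffle(out)
    return out

def debias_count(reported_ones, n, epsilon):
    """Unbiased estimate of the true number of ones from the noisy count:
    E[c] = p*t + (1-p)*(n-t), so t = (c - (1-p)*n) / (2p - 1)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return (reported_ones - (1.0 - p) * n) / (2.0 * p - 1.0)
```

Shuffling does not change the counts the analyzer sees, which is why the debiased estimate is unaffected while the privacy guarantee against the server is amplified.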
For the entire collection see [Zbl 1428.94005].Differentially private data publishing for arbitrarily partitioned datahttps://www.zbmath.org/1483.681012022-05-16T20:40:13.078697Z"Wang, Rong"https://www.zbmath.org/authors/?q=ai:wang.rong"Fung, Benjamin C. M."https://www.zbmath.org/authors/?q=ai:fung.benjamin-c-m"Zhu, Yan"https://www.zbmath.org/authors/?q=ai:zhu.yan"Peng, Qiang"https://www.zbmath.org/authors/?q=ai:peng.qiangSummary: Many models have been proposed to preserve data privacy for different data publishing scenarios. Among these models, \(\varepsilon\)-differential privacy is receiving increasing attention because it does not make assumptions about adversaries' prior knowledge and can provide a rigorous privacy guarantee. Although there are numerous proposed approaches using \(\varepsilon\)-differential privacy to publish centralized data of a single-party, differentially private data publishing for distributed data among multiple parties has not been studied extensively. The challenge in releasing distributed data is how to protect privacy and integrity during collaborative data integration and anonymization. In this paper, we present the first differentially private solution to anonymize data from two parties with \textit{arbitrarily partitioned data} in a semi-honest model. We aim at satisfying two privacy requirements: (1) the collaborative anonymization should satisfy differential privacy; (2) one party cannot learn extra information about the other party's data except for the final result and the information that can be inferred from the result. To meet these privacy requirements, we propose a distributed differentially private anonymization algorithm and guarantee that each step of the algorithm satisfies the definition of secure two-party computation.
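For reference, the standard Laplace mechanism that underlies many ε-differentially private algorithms (a generic single-party sketch, not the distributed two-party protocol of the paper) is:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value + Laplace(0, sensitivity/epsilon) noise; this
    satisfies epsilon-differential privacy for a query whose output changes
    by at most `sensitivity` between neighbouring datasets."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

A smaller ε forces a larger noise scale, which is the utility cost that distributed protocols such as the one above must also pay.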
In addition to the security and cost analyses, we demonstrate the utility of our algorithm in classification analysis.A weighted K-member clustering algorithm for K-anonymizationhttps://www.zbmath.org/1483.681022022-05-16T20:40:13.078697Z"Yan, Yan"https://www.zbmath.org/authors/?q=ai:yan.yan"Herman, Eyeleko Anselme"https://www.zbmath.org/authors/?q=ai:herman.eyeleko-anselme"Mahmood, Adnan"https://www.zbmath.org/authors/?q=ai:mahmood.adnan"Feng, Tao"https://www.zbmath.org/authors/?q=ai:feng.tao"Xie, Pengshou"https://www.zbmath.org/authors/?q=ai:xie.pengshouSummary: As a representative model for privacy preserving data publishing, K-anonymity has raised a considerable number of questions for researchers over the past few decades. Among them, how to achieve data release without sacrificing the users' privacy and how to maximize the availability of published data is the ultimate goal of privacy preserving data publishing. In order to enhance the clustering effect and reduce the unnecessary computation, this paper proposes a weighted K-member clustering algorithm. A series of weight indicators are designed to evaluate the outlyingness of records, distance between records, and information loss of the published data. The proposed algorithm can reduce the influence of outliers on the clustering effect and maintain the availability of data to the best possible extent during the clustering process. 
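A minimal greedy form of K-member clustering, the kind of unweighted baseline such weighted algorithms refine, can be sketched as follows (a generic illustration with invented data and distance function, not the proposed weighted algorithm):

```python
def k_member_clusters(records, k, dist):
    """Greedily partition records into clusters of size >= k, repeatedly
    growing a cluster from a seed with its k-1 nearest remaining records.
    Leftover records are attached to the last cluster."""
    remaining = list(records)
    clusters = []
    while len(remaining) >= k:
        seed = remaining.pop(0)
        remaining.sort(key=lambda r: dist(seed, r))
        clusters.append([seed] + remaining[:k - 1])
        remaining = remaining[k - 1:]
    if remaining:
        if clusters:
            clusters[-1].extend(remaining)   # keeps every cluster at size >= k
        else:
            clusters.append(remaining)       # fewer than k records in total
    return clusters

# Example on 1-D "quasi-identifier" values with absolute distance:
groups = k_member_clusters([1, 2, 9, 10, 3, 11], 3, lambda a, b: abs(a - b))
```

Each resulting cluster is then generalized to a common quasi-identifier value; tight clusters mean less generalization and hence lower information loss, which is what the weighting scheme in the paper aims to improve.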
Experimental analysis suggests that the proposed method generates lower information loss, improves the clustering effect, and is less sensitive to outliers as compared with some existing methods.Output compression, MPC, and iO for Turing machineshttps://www.zbmath.org/1483.681032022-05-16T20:40:13.078697Z"Badrinarayanan, Saikrishna"https://www.zbmath.org/authors/?q=ai:badrinarayanan.saikrishna"Fernando, Rex"https://www.zbmath.org/authors/?q=ai:fernando.rex"Koppula, Venkata"https://www.zbmath.org/authors/?q=ai:koppula.venkata"Sahai, Amit"https://www.zbmath.org/authors/?q=ai:sahai.amit"Waters, Brent"https://www.zbmath.org/authors/?q=ai:waters.brentSummary: In this work, we study the fascinating notion of output-compressing randomized encodings for Turing machines, in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is, the size of the encoding must be independent of the running time and output length of the Turing machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing machines in the shared randomness model, assuming iO for circuits and any assumption in the set \{LWE, DDH, \(\mathrm{N}^\mathrm{th}\) Residuosity\}.
We then show interesting implications of the above result to basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO):
\par 1.) Compact MPC for Turing machines in the random oracle model. In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing machines whose communication complexity is independent of the running time and output length of the Turing machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. \textit{P. Hubáček} and \textit{D. Wichs} [in: Proceedings of the 6th conference on innovations in theoretical computer science, ITCS'15. New York, NY: Association for Computing Machinery (ACM). 163--172 (2015; Zbl 1364.68201)] showed, via an incompressibility argument, that even for the restricted setting of circuits it is impossible to construct a malicious-secure two-party computation protocol in the plain model where the communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (noncompact) MPC protocol in the plain model to a compact MPC protocol for Turing machines in the random oracle model, assuming output-compressing randomized encodings in the shared randomness model.
\par 2.) Succinct iO for Turing machines in the shared randomness model. In all existing constructions of iO for Turing machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set \{LWE, DDH, \(\mathrm{N}^\mathrm{th}\) Residuosity\}.
For the entire collection see [Zbl 1428.94008].Tractable inference in credal sentential decision diagramshttps://www.zbmath.org/1483.681042022-05-16T20:40:13.078697Z"Mattei, Lilith"https://www.zbmath.org/authors/?q=ai:mattei.lilith"Antonucci, Alessandro"https://www.zbmath.org/authors/?q=ai:antonucci.alessandro"Mauá, Denis Deratani"https://www.zbmath.org/authors/?q=ai:maua.denis-deratani"Facchini, Alessandro"https://www.zbmath.org/authors/?q=ai:facchini.alessandro"Villanueva Llerena, Julissa"https://www.zbmath.org/authors/?q=ai:villanueva-llerena.julissaSummary: \textit{Probabilistic sentential decision diagrams} are logic circuits where the inputs of disjunctive gates are annotated by probability values. They allow for a compact representation of joint probability mass functions defined over sets of Boolean variables that are also consistent with the logical constraints defined by the circuit. The probabilities in such a model are usually ``learned'' from a set of observations. This leads to overconfident and prior-dependent inferences when data are scarce, unreliable or conflicting. In this work, we develop the \textit{credal sentential decision diagrams}, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with (so-called \textit{credal}) sets of mass functions. These models induce a joint credal set over the set of Boolean variables, which sharply assigns probability zero to states inconsistent with the logical constraints. Three inference algorithms are derived for these models. 
These allow one to compute: (i) the lower and upper probabilities of an observation for an arbitrary number of variables; (ii) the lower and upper conditional probabilities for the state of a single variable given an observation; (iii) whether or not all the probabilistic sentential decision diagrams compatible with the credal specification have the same most probable explanation of a given set of variables given an observation of the other variables. These inferences are \textit{tractable}, as all three algorithms, based on a bottom-up traversal with local linear programming tasks on the disjunctive gates, run in polynomial time with respect to the circuit size. The first algorithm is always exact, while the remaining two might induce a conservative (outer) approximation in the case of multiply connected circuits. A semantics for this approximation is provided, together with an auxiliary algorithm that decides whether or not the result is exact and a brute-force characterization of exact inference in these cases. For a first empirical validation, we consider a simple application based on noisy seven-segment display images. The credal models are observed to properly distinguish between easy and hard-to-detect instances and outperform other generative models not able to cope with logical constraints.Simulating R systems by P systemshttps://www.zbmath.org/1483.681052022-05-16T20:40:13.078697Z"Alhazov, Artiom"https://www.zbmath.org/authors/?q=ai:alhazov.artiom"Aman, Bogdan"https://www.zbmath.org/authors/?q=ai:aman.bogdan"Freund, Rudolf"https://www.zbmath.org/authors/?q=ai:freund.rudolf"Ivanov, Sergiu"https://www.zbmath.org/authors/?q=ai:ivanov.sergiuSummary: We show multiple ways to simulate R systems by non-cooperative P systems with atomic control by promoters and/or inhibitors, or with matter/antimatter annihilation rules, with a slowdown by a constant factor only. 
The descriptional complexity of the simulating P systems is also linear with respect to that of the simulated R system. All constants depend on how general the model of R systems is, as well as on the chosen control ingredients of the P systems. Special attention is paid to the differences in the mode of rule application in these models.
For the entire collection see [Zbl 1358.68015].Purely catalytic P systems over integers and their generative powerhttps://www.zbmath.org/1483.681062022-05-16T20:40:13.078697Z"Alhazov, Artiom"https://www.zbmath.org/authors/?q=ai:alhazov.artiom"Belingheri, Omar"https://www.zbmath.org/authors/?q=ai:belingheri.omar"Freund, Rudolf"https://www.zbmath.org/authors/?q=ai:freund.rudolf"Ivanov, Sergiu"https://www.zbmath.org/authors/?q=ai:ivanov.sergiu"Porreca, Antonio E."https://www.zbmath.org/authors/?q=ai:porreca.antonio-e"Zandron, Claudio"https://www.zbmath.org/authors/?q=ai:zandron.claudioSummary: We further investigate the computing power of the recently introduced P systems with \(\mathbb Z\)-multisets (also known as hybrid sets) as generative devices. These systems apply catalytic rules in the maximally parallel way, even consuming absent non-catalysts, thus effectively generating vectors of arbitrary (not just non-negative) integers. The rules may only be made inapplicable by dissolution rules. However, this releases the catalysts into the immediately outer region, where new rules might become applicable to them. We discuss the generative power of this model. Finally, we consider the variant with mobile catalysts.
For the entire collection see [Zbl 1358.68015].(Tissue) P systems with vesicles of multisetshttps://www.zbmath.org/1483.681072022-05-16T20:40:13.078697Z"Alhazov, Artiom"https://www.zbmath.org/authors/?q=ai:alhazov.artiom"Freund, Rudolf"https://www.zbmath.org/authors/?q=ai:freund.rudolf"Ivanov, Sergiu"https://www.zbmath.org/authors/?q=ai:ivanov.sergiu"Verlan, Sergey"https://www.zbmath.org/authors/?q=ai:verlan.sergeySummary: We consider tissue P systems working on vesicles of multisets with the very simple operations of insertion, deletion, and substitution of single objects. With the whole multiset being enclosed in a vesicle, sending it to a target cell can be indicated in those simple rules working on the multiset. As derivation modes we consider the sequential mode, where exactly one rule is applied in a derivation step, and the set maximal mode, where in each derivation step a non-extendable set of rules is applied. With the set maximal mode, computational completeness can already be obtained with tissue P systems having a tree structure, whereas tissue P systems even with an arbitrary communication structure are not computationally complete when working in the sequential mode. Adding polarizations -- \(-1\), \(0\), \(1\) are sufficient -- allows for obtaining computational completeness even for tissue P systems working in the sequential mode.
For the entire collection see [Zbl 1433.68013].P systems working in maximal variants of the set derivation modehttps://www.zbmath.org/1483.681082022-05-16T20:40:13.078697Z"Alhazov, Artiom"https://www.zbmath.org/authors/?q=ai:alhazov.artiom"Freund, Rudolf"https://www.zbmath.org/authors/?q=ai:freund.rudolf"Verlan, Sergey"https://www.zbmath.org/authors/?q=ai:verlan.sergeySummary: In P systems working in the set derivation mode, even in the maximally parallel derivation mode, rules are only applied in at most one copy in each derivation step. We also consider the set mode in the cases of taking those sets of rules with the maximal number of applicable rules or with affecting the maximal number of objects. For many variants of P systems, the computational completeness proofs even literally still hold true for these new set derivation modes. On the other hand, we obtain new results for P systems using target selection for the rules to be chosen together with these set derivation modes.
For the entire collection see [Zbl 1358.68015].Computational power of protein networkshttps://www.zbmath.org/1483.681092022-05-16T20:40:13.078697Z"Aman, Bogdan"https://www.zbmath.org/authors/?q=ai:aman.bogdan"Ciobanu, Gabriel"https://www.zbmath.org/authors/?q=ai:ciobanu.gabrielSummary: Cell biology provides useful ideas to computer scientists in order to construct models which can provide more efficient computations. In this paper we prove that an abstract model of protein-protein interaction derived from membrane computing has the same computational power as a Turing machine by using a rather small number of proteins having at most length two, where length is an abstract measure of complexity.
For the entire collection see [Zbl 1358.68015].Applications of P systems in population biology and ecology: the cases of MPP and APP systemshttps://www.zbmath.org/1483.681102022-05-16T20:40:13.078697Z"Barbuti, Roberto"https://www.zbmath.org/authors/?q=ai:barbuti.roberto"Bove, Pasquale"https://www.zbmath.org/authors/?q=ai:bove.pasquale"Milazzo, Paolo"https://www.zbmath.org/authors/?q=ai:milazzo.paolo"Pardini, Giovanni"https://www.zbmath.org/authors/?q=ai:pardini.giovanniSummary: We describe two extensions of P systems for the modelling of populations and ecosystems. They are the minimal probabilistic P systems (MPP systems) and the attributed probabilistic P systems (APP systems). We describe also two case studies in which the two formalisms have been applied to the study of real ecological systems. The first case study deals with the causes of the stability of European hybrid populations of water frogs. The second case study deals with social interactions and the establishment of dominance hierarchies in primates.
For the entire collection see [Zbl 1358.68015].Chemical term reduction with active P systemshttps://www.zbmath.org/1483.681112022-05-16T20:40:13.078697Z"Battyányi, Péter"https://www.zbmath.org/authors/?q=ai:battyanyi.peter"Vaszil, György"https://www.zbmath.org/authors/?q=ai:vaszil.gyorgySummary: We present an attempt to translate the terms of a variant of the \(\gamma\)-calculus into membrane systems, such that the reduction sequences in the calculus are simulated by the computation of the corresponding membrane system. Finding such translations could help to reason about properties of membrane computations and properties of the reduction sequences of chemical calculi in terms of each other. By translating chemical computing formalisms to membrane systems, we might also obtain a high-level programming language for P systems which could also serve as an elegant and efficient way of presenting P system algorithms.
For the entire collection see [Zbl 1358.68015].The evolutionary resilience of distributed cellular computinghttps://www.zbmath.org/1483.681122022-05-16T20:40:13.078697Z"Cavaliere, Matteo"https://www.zbmath.org/authors/?q=ai:cavaliere.matteo"Sanchez, Alvaro"https://www.zbmath.org/authors/?q=ai:sanchez.alvaro-iSummary: Individual cells process environmental information relevant to their functions using biochemical processes and signalling networks that implement a flow of information from the extracellular environment, across the cell membrane to the cytoplasm in which the actual cellular computation takes place (in the form of gene expression). In many cases, the environmental information to be processed consists either of molecules produced by other cells or of shared extracellular molecules -- in this case the processing of the environmental information is a distributed, highly parallel computing process, in which cells must synchronize, coordinate and cooperate. While the ability of cells to cooperate can increase their overall computational power, it also raises an evolutionary stability issue -- populations of cooperating cells are at risk of invasion by cheating cells, cells that do not cooperate but exploit the benefits of the population. The bridge between membrane computing (as a mathematical formalization of cellular computing) and evolutionary dynamics (as a mathematical formalization of natural selection) could lead to interesting insights on the evolutionary stability of cellular computing.
For the entire collection see [Zbl 1358.68015].P colonies with evolving environmenthttps://www.zbmath.org/1483.681132022-05-16T20:40:13.078697Z"Ciencialová, Lucie"https://www.zbmath.org/authors/?q=ai:ciencialova.lucie"Cienciala, Luděk"https://www.zbmath.org/authors/?q=ai:cienciala.ludek"Sosík, Petr"https://www.zbmath.org/authors/?q=ai:sosik.petrSummary: We study two variants of P colonies with dynamic environment changing due to an underlying 0L scheme: P colonies with two objects inside each agent that can only consume objects, and P colonies with one object inside each agent which uses rewriting and communication rules. We show that the first kind of P colonies with one consumer agent can generate all sets of natural numbers computed by partially blind register machines. The second kind of P colonies with two agents with rewriting/communication rules is computationally complete. Finally, we demonstrate that P colonies with one such agent with checking programs can simulate catalytic P systems with one catalyst, and consequently, another relation to partially blind register machines is established.
For the entire collection see [Zbl 1358.68015].Continuation passing semantics for membrane systemshttps://www.zbmath.org/1483.681142022-05-16T20:40:13.078697Z"Ciobanu, Gabriel"https://www.zbmath.org/authors/?q=ai:ciobanu.gabriel"Todoran, Eneia Nicolae"https://www.zbmath.org/authors/?q=ai:todoran.eneia-nicolaeSummary: Membrane systems are described by a language in which multisets of objects are encapsulated in hierarchical structures of compartments. The language provides primitives for parallel communication of objects across membranes and a primitive for membrane creation. The behaviour of each membrane is specified by means of multiset rewriting rules. We provide a compositional semantics for membrane systems by using the continuation passing style and metric spaces.
For the entire collection see [Zbl 1358.68015].Minimal multiset grammars for recurrent dynamicshttps://www.zbmath.org/1483.681152022-05-16T20:40:13.078697Z"Farinelli, Alessandro"https://www.zbmath.org/authors/?q=ai:farinelli.alessandro"Franco, Giuditta"https://www.zbmath.org/authors/?q=ai:franco.giuditta"Rizzi, Romeo"https://www.zbmath.org/authors/?q=ai:rizzi.romeoSummary: A biochemical network modeled by a multiset grammar may be investigated from a dynamical viewpoint by a linear recurrence system. This interesting connection between computation by a multiset grammar and a (network) recurrent dynamics poses a minimization problem, which turns out to be NP-hard.
For the entire collection see [Zbl 1358.68015].Solution to motif finding problem in membraneshttps://www.zbmath.org/1483.681162022-05-16T20:40:13.078697Z"Gapuz, Katrina B."https://www.zbmath.org/authors/?q=ai:gapuz.katrina-b"Mendoza, Ephraim D."https://www.zbmath.org/authors/?q=ai:mendoza.ephraim-d"Juayong, Richelle Ann B."https://www.zbmath.org/authors/?q=ai:juayong.richelle-ann-b"Hernandez, Nestine Hope S."https://www.zbmath.org/authors/?q=ai:hernandez.nestine-hope-s"Cabarle, Francis George C."https://www.zbmath.org/authors/?q=ai:cabarle.francis-george-c"Adorna, Henry N."https://www.zbmath.org/authors/?q=ai:adorna.henry-natividadSummary: The study of genes is an important field of biology. A way to understand genetic composition is through finding regularly occurring nucleotide sequences, or motifs, in a DNA sequence. However, finding these motifs is difficult and is shown to be NP-complete. In this paper, we use a variant of P systems called Evolution-Communication P systems with energy using string objects to solve the Motif Finding Problem in \(O(lt)\)-time where \(l\) is the length of the motif and \(t\) is the number of DNA sequences given.
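To make the problem itself concrete, here is a brute-force rendering of motif finding: seek a length-\(l\) string minimising the total Hamming distance to its best match in each sequence. This enumeration is shown only for illustration; the paper solves the problem with an Evolution-Communication P system, not with this search, and its exponential candidate space is precisely what massively parallel membrane computing is meant to tame.

```python
from itertools import product

def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def best_match_distance(motif, sequence):
    """Hamming distance of the motif to its closest window in the sequence."""
    l = len(motif)
    return min(hamming(motif, sequence[i:i + l]) for i in range(len(sequence) - l + 1))

def find_motif(sequences, l):
    """Enumerate all 4^l candidate motifs (exponential time) and return
    the one with the smallest total distance over all sequences."""
    best, best_score = None, float("inf")
    for cand in product("ACGT", repeat=l):
        motif = "".join(cand)
        score = sum(best_match_distance(motif, s) for s in sequences)
        if score < best_score:
            best, best_score = motif, score
    return best, best_score
```

A motif occurring exactly in every sequence yields total distance 0; otherwise the returned score measures how degenerate the best consensus is.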
For the entire collection see [Zbl 1358.68015].Remarks on the computational power of some restricted variants of P systems with active membraneshttps://www.zbmath.org/1483.681172022-05-16T20:40:13.078697Z"Gazdag, Zsolt"https://www.zbmath.org/authors/?q=ai:gazdag.zsolt"Kolonits, Gábor"https://www.zbmath.org/authors/?q=ai:kolonits.gaborSummary: In this paper we consider three restricted variants of P systems with active membranes: (1) P systems using send-out communication rules only, (2) P systems using elementary membrane division and dissolution rules only, and (3) polarizationless P systems using dissolution and unit evolution rules only. We show that every problem in \(\mathbf {P}\) can be solved with uniform families of any of these variants using reasonably weak uniformity conditions. This, using known results on the upper bound of the computational power of variants (1) and (3), yields new characterizations of the class \(\mathbf {P}\). In the case of variant (2) we provide a further characterization of \(\mathbf {P}\) by giving a semantic restriction on the computations of P systems of this variant.
For the entire collection see [Zbl 1358.68015].Kernel P systems modelling, testing and verification -- sorting case studyhttps://www.zbmath.org/1483.681182022-05-16T20:40:13.078697Z"Gheorghe, Marian"https://www.zbmath.org/authors/?q=ai:gheorghe.marian"Ceterchi, Rodica"https://www.zbmath.org/authors/?q=ai:ceterchi.rodica"Ipate, Florentin"https://www.zbmath.org/authors/?q=ai:ipate.florentin"Konur, Savas"https://www.zbmath.org/authors/?q=ai:konur.savasSummary: A kernel P system (kP system, for short) integrates in a coherent and elegant manner many of the P system features most successfully used for modelling various applications and, consequently, it provides a framework for analyzing these models. In this paper, we illustrate the modelling capacity of kernel P systems by providing a number of kP system models for sorting algorithms. Furthermore, the problem of testing systems modelled as kP systems is also discussed and a test generation method based on automata is proposed. We also demonstrate how formal verification can be used to validate that the given models work as desired.
For the entire collection see [Zbl 1358.68015].Coping with dynamical structures for interdisciplinary applications of membrane computinghttps://www.zbmath.org/1483.681192022-05-16T20:40:13.078697Z"Hinze, Thomas"https://www.zbmath.org/authors/?q=ai:hinze.thomasSummary: Biological information processing and maintenance of life mainly utilise dynamical structures at different levels from a nanoscopic up to a macroscopic scale. Providing a high degree of reliability, reproducibility, unambiguousness, and addressability, underlying compositional processes appear as ideal candidates to perform computational tasks in a discretised manner. In this essay, we consider four levels in which dynamical structures enable efficient handling of information: (1) the molecular level, (2) the level of reaction network modules, (3) the level of membranes, and (4) the level of higher-order organisms and populations. All of them have in common the capability of controlled memory-based state transitions and hence dedicated system configurations encoding behavioural patterns. Due to their discrete algebraic nature, membrane systems represent advantageous frameworks for formalising the corresponding activities. This in turn paves the way towards efficient tools inspired by nature with manifold smart applications in engineering, computer science, and systems biology. We illustrate membrane systems' abilities, benefits, and progress in coping with dynamical structures from an integrative perspective.
For the entire collection see [Zbl 1358.68015].Walking membranes: grid-exploring P systems with artificial evolution for multi-purpose topological optimisation of cascaded processeshttps://www.zbmath.org/1483.681202022-05-16T20:40:13.078697Z"Hinze, Thomas"https://www.zbmath.org/authors/?q=ai:hinze.thomas"Weber, Lea Louise"https://www.zbmath.org/authors/?q=ai:weber.lea-louise"Hatnik, Uwe"https://www.zbmath.org/authors/?q=ai:hatnik.uweSummary: The capability of self-organisation belongs to the most fascinating features of many living organisms. It results in formation and continuous adjustment of dedicated spatial structures which in turn can sustain a high fitness and efficient use of resources even if environmental conditions or internal factors tend to vary. Spatial structures in this context might for instance incorporate topological arrangements of cellular compartments and filaments towards fast and effective signal transduction. Due to its discrete nature, the P systems approach represents an ideal candidate in order to capture emergence and evolution of topologies composed of membranes passable by molecular particles. We introduce grid-exploring P systems in which generalised membranes form the grid elements keeping the grid structure variable. Particles initially placed at different positions of the grid's boundary individually run through the grid visiting a sequence of designated membranes in which they become successively processed. Using artificial evolution, the arrangement of membranes within the grid becomes optimised for shortening the total time duration necessary for complete passage and processing of all particles. Interestingly, the corresponding framework comprises numerous practical applications beyond modelling of biological self-organisation. 
When replacing membranes by queue-based treads, tools, or processing units and particles by customers, workpieces, or raw products, we obtain a multi-purpose optimisation strategy along with a simulation framework. Three case studies from cell signalling, retail industry, and manufacturing demonstrate various benefits from the concept.
For the entire collection see [Zbl 1358.68015].Array-rewriting P systems with basic puzzle grammar rules and permitting featureshttps://www.zbmath.org/1483.681212022-05-16T20:40:13.078697Z"Isawasan, Pradeep"https://www.zbmath.org/authors/?q=ai:isawasan.pradeep"Muniyandi, Ravie Chandren"https://www.zbmath.org/authors/?q=ai:muniyandi.ravie-chandren"Venkat, Ibrahim"https://www.zbmath.org/authors/?q=ai:venkat.ibrahim"Subramanian, K. G."https://www.zbmath.org/authors/?q=ai:subramanian.kumbakonam-govindarajanSummary: Motivated by the problem of tiling the plane, puzzle grammars were introduced as a mechanism for generating languages of picture arrays in the two-dimensional plane. On the other hand, the array P system with array objects and basic puzzle grammar (BPG) rules was introduced as a variant of array-generating P systems, developed with a view to linking the two areas of membrane computing and picture array grammars. Here we incorporate the feature of permitting symbols in the rules of the BPG array P system, thus introducing the permitting array P system with BPG rules (pAP(BPG)). We show that the permitting feature gives more generative power to the BPG array P system with one or two membranes. We also show that a pAP(BPG) with only two membranes under the t-communication mode can generate picture arrays of square frames.
For the entire collection see [Zbl 1358.68015].On quantitative comparison of chemical reaction network modelshttps://www.zbmath.org/1483.681222022-05-16T20:40:13.078697Z"Kahramanoğulları, Ozan"https://www.zbmath.org/authors/?q=ai:kahramanogullari.ozanSummary: Chemical reaction networks (CRNs) provide a convenient language for modelling a broad variety of biological systems. These models are commonly studied with respect to the time series they generate in deterministic or stochastic simulations. Their dynamic behaviours are then analysed, often by using deterministic methods based on differential equations with a focus on the steady states. Here, we propose a method for comparing CRNs with respect to their behaviour in stochastic simulations. Our method is based on using the flux graphs that are delivered by stochastic simulations as abstract representations of their dynamic behaviour. This allows us to compare the behaviour of any two CRNs for any time interval, and define a notion of equivalence on them that overlaps with graph isomorphism at the lowest level of representation. The similarity between the compared CRNs can be quantified in terms of their distance. The results can then be used to refine the models or to replace a larger model with a smaller one that produces the same behaviour or vice versa.
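The flux-graph idea can be illustrated in a toy form: a stochastic (Gillespie) simulation of a CRN records how often each reaction fires; the resulting flux profile is an abstract fingerprint of the dynamic behaviour, and two CRNs can then be compared by a distance between their normalised profiles. The data structures and the L1 distance below are illustrative simplifications, not the paper's exact construction.

```python
import random
from collections import Counter

def gillespie_flux(reactions, state, t_end, seed=0):
    """Stochastic simulation of a CRN up to time t_end.
    reactions: list of (rate, consumed, produced), species multisets
    given as Counters.  Returns firing counts per reaction index."""
    rng = random.Random(seed)
    t = 0.0
    flux = Counter()
    while True:
        props = []
        for rate, consumed, _ in reactions:
            a = rate
            for sp, n in consumed.items():       # mass-action propensity
                for k in range(n):
                    a *= max(state[sp] - k, 0)
            props.append(a)
        total = sum(props)
        if total == 0:                           # no reaction applicable
            break
        t += rng.expovariate(total)              # exponential waiting time
        if t > t_end:                            # horizon reached
            break
        r = rng.uniform(0.0, total)              # pick a reaction
        i = 0
        while r > props[i]:
            r -= props[i]
            i += 1
        _, consumed, produced = reactions[i]
        for sp, n in consumed.items():
            state[sp] -= n
        for sp, n in produced.items():
            state[sp] += n
        flux[i] += 1
    return flux

def flux_distance(f, g):
    """L1 distance between normalised flux profiles of two simulations."""
    nf, ng = sum(f.values()) or 1, sum(g.values()) or 1
    return sum(abs(f[k] / nf - g[k] / ng) for k in set(f) | set(g))
```

A distance of 0 means the two runs distributed their firings over reactions in identical proportions; in the paper's richer setting the compared objects are graphs rather than bare count vectors.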
For the entire collection see [Zbl 1441.68008].Agent-based simulation of kernel P systems with division rules using FLAMEhttps://www.zbmath.org/1483.681232022-05-16T20:40:13.078697Z"Lefticaru, Raluca"https://www.zbmath.org/authors/?q=ai:lefticaru.raluca"Macías-Ramos, Luis F."https://www.zbmath.org/authors/?q=ai:macias-ramos.luis-felipe"Niculescu, Ionuţ Mihai"https://www.zbmath.org/authors/?q=ai:niculescu.ionut-mihai"Mierlă, Laurenţiu"https://www.zbmath.org/authors/?q=ai:mierla.laurentiuSummary: Kernel P systems (or kP systems) bring together relevant features from several P systems flavours into a unified kernel model which allows solving complex problems using a straightforward code programming approach. kPWorkbench is a software suite enabling specification, parsing and simulation of kP systems models defined in the kernel P-Lingua (or kP-Lingua) programming language. It has been shown that any computation of a kP system involving only rewriting and communication rules can be simulated by a family of communicating stream X-machines (or CSXM), which are the core of the FLAME agent-based simulation environment. Following this, kPWorkbench enables translating kP-Lingua specifications into FLAME models, which can be simulated in a sequential or parallel (MPI-based) way by using the FLAME framework. Moreover, the FLAME GPU framework enables efficient simulation of CSXM on CUDA-enabled GPGPU devices. In this paper we present an extension of the kPWorkbench framework to generate FLAME models from kP-Lingua specifications including structural rules; and consider translation of FLAME specifications into FLAME GPU models. Also, we conduct a performance evaluation regarding the simulation of equivalent kP systems and CSXM models in kPWorkbench and FLAME respectively.
For the entire collection see [Zbl 1358.68015].Shallow non-confluent P systemshttps://www.zbmath.org/1483.681242022-05-16T20:40:13.078697Z"Leporati, Alberto"https://www.zbmath.org/authors/?q=ai:leporati.alberto"Manzoni, Luca"https://www.zbmath.org/authors/?q=ai:manzoni.luca-e"Mauri, Giancarlo"https://www.zbmath.org/authors/?q=ai:mauri.giancarlo"Porreca, Antonio E."https://www.zbmath.org/authors/?q=ai:porreca.antonio-e"Zandron, Claudio"https://www.zbmath.org/authors/?q=ai:zandron.claudioSummary: We prove that non-confluent (i.e., strongly nondeterministic) P systems with active membranes working in polynomial time are able to simulate polynomial-space nondeterministic Turing machines, and thus to solve all PSPACE problems. Unlike the confluent case, this result holds for shallow P systems. In particular, depth~1 (i.e., only one membrane nesting level and using elementary membrane division only) already suffices, and neither dissolution nor send-in communication rules are needed.
For the entire collection see [Zbl 1358.68015].Solving hard problems by protein folding?https://www.zbmath.org/1483.681252022-05-16T20:40:13.078697Z"Lingas, Andrzej"https://www.zbmath.org/authors/?q=ai:lingas.andrzejSummary: Inspired by the NP-hardness of string folding problems modeling the natural process of protein folding, we discuss the idea of solving instances of NP-hard problems (e.g., string folding problems) of moderate size by letting artificially assembled proteins fold. The accuracy with which one can combinatorially model the protein folding process, e.g., by string folding, as well as the precision with which one could experimentally estimate the energy of folded artificial proteins are crucial issues.
For the entire collection see [Zbl 1464.68016].Revising the membrane computing model for Byzantine agreementhttps://www.zbmath.org/1483.681262022-05-16T20:40:13.078697Z"Nicolescu, Radu"https://www.zbmath.org/authors/?q=ai:nicolescu.raduSummary: We refine our earlier version of P systems with complex symbols. The new version, called cP systems, enables the creation and manipulation of high-level data structures which are typical in high-level languages, such as: relations (graphs), associative arrays, lists, trees. We assess these capabilities by attempting a revised version of our previously best solution for the Byzantine agreement problem -- a famous problem in distributed algorithms, with non-trivial data structures and algorithms. In contrast to our previous solutions, which use a greater than exponential number of symbols and rules, the new solution uses a fixed sized alphabet and ruleset, independent of the problem size. The new ruleset follows closely the conceptual description of the algorithm. This revised framework opens the way to further extensions, which may bring P systems closer to the conceptual actor model.
For the entire collection see [Zbl 1358.68015].Rewriting P systems with flat-splicing ruleshttps://www.zbmath.org/1483.681272022-05-16T20:40:13.078697Z"Pan, Linqiang"https://www.zbmath.org/authors/?q=ai:pan.linqiang"Song, Bosheng"https://www.zbmath.org/authors/?q=ai:song.bosheng"Subramanian, K. G."https://www.zbmath.org/authors/?q=ai:subramanian.kumbakonam-govindarajanSummary: Rewriting P systems, as language generating devices, are one of the earliest classes of P systems with structured strings as objects and the rewriting rules as evolution rules. Flat splicing is an operation on strings, inspired by a splicing operation on circular strings. In this work, we consider a variant of rewriting P systems with only regular or linear rewriting rules and alphabetic flat splicing rules, and the language generative power of rewriting P systems with flat splicing rules in comparison with flat splicing systems and Chomsky hierarchy is investigated.
For the entire collection see [Zbl 1358.68015].A view of P systems from information theoryhttps://www.zbmath.org/1483.681282022-05-16T20:40:13.078697Z"Sempere, José M."https://www.zbmath.org/authors/?q=ai:sempere.jose-mSummary: In this work we propose a new view of P systems using the framework of information theory. Given a cell-like P system with communication and evolution rules, we analyze the amount of information that it holds as the result of symbol movements across the membranes. Under this approach, we propose new definitions and results related to the information of P systems and their entropy. In addition, we propose a new working mode for P systems based only on the entropy evolution during the computation time.
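A minimal rendering of this information-theoretic view: treat the placement of symbols inside membranes as an empirical distribution and measure its Shannon entropy; communication rules that move symbols across membranes change this quantity over the computation. The definitions below are illustrative stand-ins, not the paper's exact ones.

```python
import math
from collections import Counter

def entropy(multiset):
    """Shannon entropy (in bits) of the empirical distribution
    induced by a multiset of counts."""
    total = sum(multiset.values())
    return -sum((n / total) * math.log2(n / total)
                for n in multiset.values() if n)

def configuration_entropy(membranes):
    """Entropy of the joint (symbol, membrane) distribution of a
    cell-like P system configuration, given as a dict mapping
    membrane labels to Counters of symbol contents."""
    joint = Counter()
    for label, contents in membranes.items():
        for sym, n in contents.items():
            joint[(sym, label)] += n
    return entropy(joint)
```

For instance, moving half the objects of a one-membrane configuration into an inner membrane spreads the joint distribution over more (symbol, membrane) pairs and raises the entropy accordingly.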
For the entire collection see [Zbl 1358.68015].Krivine machine and Taylor expansion in a non-uniform settinghttps://www.zbmath.org/1483.681292022-05-16T20:40:13.078697Z"Allioux, Antoine"https://www.zbmath.org/authors/?q=ai:allioux.antoineSummary: The Krivine machine is an abstract machine implementing the linear head reduction of \(\lambda\)-calculus. Ehrhard and Regnier gave a resource sensitive version returning the annotated form of a \(\lambda\)-term accounting for the resources used by the linear head reduction. These annotations take the form of terms in the resource \(\lambda\)-calculus.
We generalize this resource-driven Krivine machine to the case of the algebraic \(\lambda\)-calculus. The latter is an extension of the pure \(\lambda\)-calculus allowing for the linear combination of \(\lambda\)-terms with coefficients taken from a semiring. Our machine associates a \(\lambda\)-term \(M\) and a resource annotation \(t\) with a scalar \(\alpha\) in the semiring describing some quantitative properties of the linear head reduction of \(M\).
In the particular case of non-negative real numbers and of algebraic terms \(M\) representing probability distributions, the coefficient \(\alpha\) gives the probability that the linear head reduction actually uses exactly the resources annotated by \(t\). In the general case, we prove that the coefficient \(\alpha\) can be recovered from the coefficient of \(t\) in the Taylor expansion of \(M\) and from the normal form of \(t\).
For the entire collection see [Zbl 1436.68021].Linear \(\beta\)-reductionhttps://www.zbmath.org/1483.681302022-05-16T20:40:13.078697Z"Guerrini, Stefano"https://www.zbmath.org/authors/?q=ai:guerrini.stefanoSummary: Linear head reduction is a key tool for the analysis of reduction machines for \(\lambda\)-calculus and for game semantics. Its definition requires a notion of redex at a distance named primary redex in the literature. Nevertheless, a clear and complete syntactic analysis of this rule is missing. We present here a general notion of \(\beta\)-reduction at a distance and of linear reduction (i.e., not restricted to the head variable), and we analyse their relations and properties. This analysis rests on a variant of the so-called \(\sigma\)-equivalence that is more suitable for the analysis of reduction machines, since the position along the spine of primary redexes is not permuted. We finally show that, in the simply typed case, the proof of strong normalisation of linear reduction can be obtained by a trivial tuning of Gandy's proof for strong normalisation of \(\beta\)-reduction.
For the entire collection see [Zbl 1436.68021].Line reconfiguration by programmable particles maintaining connectivityhttps://www.zbmath.org/1483.681312022-05-16T20:40:13.078697Z"Nokhanji, Nooshin"https://www.zbmath.org/authors/?q=ai:nokhanji.nooshin"Santoro, Nicola"https://www.zbmath.org/authors/?q=ai:santoro.nicolaSummary: In the geometric Amoebot model, programmable matter is viewed as a very large number of identical micro/nano-sized entities, called particles, operating on a hexagonal tessellation of the plane, with limited computational capabilities, interacting only with neighboring particles, and moving from a grid node to an empty neighboring node. An important requirement, common to most research in this model, is that the particles must be connected at all times.
Within this model, a central concern has been the formation of geometric shapes; in particular, the line is the elementary shape used as the basis to form more complex shapes, and as a step to solve complex tasks. If some of the particles on the line are faulty it might be necessary for the non-faulty particles to reconstruct a line that does not contain faulty particles. In this paper we study the Connected Line Recovery problem of reconstructing the line without violating the connectivity requirement. We provide a complete feasibility characterization of the problem, identifying the conditions necessary for its solvability, and constructively proving the sufficiency of those conditions. Our algorithm allows the non-faulty particles to solve the problem, regardless of the initial distribution of the faults and of their number.
For the entire collection see [Zbl 1464.68016].How to implement a non-uniform or non-closed shufflehttps://www.zbmath.org/1483.681322022-05-16T20:40:13.078697Z"Saito, Takahiro"https://www.zbmath.org/authors/?q=ai:saito.takahiro"Miyahara, Daiki"https://www.zbmath.org/authors/?q=ai:miyahara.daiki"Abe, Yuta"https://www.zbmath.org/authors/?q=ai:abe.yuta"Mizuki, Takaaki"https://www.zbmath.org/authors/?q=ai:mizuki.takaaki"Shizuya, Hiroki"https://www.zbmath.org/authors/?q=ai:shizuya.hirokiSummary: Card-based protocols allow players to perform secure multiparty computations using a deck of physical cards, and rely on shuffle actions such as the (normal) shuffle, the random cut, and the random bisection cut. A shuffle action is mathematically defined by a pair of a permutation set (a subset of the symmetric group) and a probability distribution on it; while any shuffle action can be considered in theory, it may be unclear whether it can be easily implemented by human hands. As one of the most general results, Koch and Walzer showed that any uniform closed shuffle (meaning that its permutation set is a subgroup and its distribution is uniform) can be implemented by human hands with the help of additional cards. However, several existing protocols use non-uniform and/or non-closed shuffles. To implement these specific shuffles, Nishimura et al. proposed the idea of using (special) physical cases that can store piles of cards, and Koch and Walzer proposed an implementation of a specific non-closed shuffle with additional cards. Because their implementations handle only a limited class of non-uniform and/or non-closed shuffles, finding a general method for implementing any shuffle remained open. In this paper, we solve this problem: we implement ``any'' shuffle with only additional cards, provided that every probability in its distribution is a rational number. 
Therefore, our implementation works for any non-closed or non-uniform shuffle (if the distribution is rational as above).
For the entire collection see [Zbl 1464.68016].A parallel linear temporal logic tableauhttps://www.zbmath.org/1483.681332022-05-16T20:40:13.078697Z"McCabe-Dansted, John C."https://www.zbmath.org/authors/?q=ai:mccabe-dansted.john-christopher"Reynolds, Mark"https://www.zbmath.org/authors/?q=ai:reynolds.mark-alexanderSummary: For many applications, we are unable to take full advantage of the potential massive parallelisation offered by supercomputers or cloud computing because it is too hard to work out how to divide the computation task between processors in such a way as to minimise the need for communication. However, a recently developed branch-independent tableau for the widely used temporal logic LTL should intuitively be easy to parallelise, as each branch can be developed independently. Here we describe a simple technique for partitioning such a tableau so that each partition can be processed independently, without the need for interprocess communication. We investigate the extent to which this technique improves the performance of the LTL tableau on standard benchmarks and random formulas.
For the entire collection see [Zbl 1436.68017].Adaptively secure MPC with sublinear communication complexityhttps://www.zbmath.org/1483.681342022-05-16T20:40:13.078697Z"Cohen, Ran"https://www.zbmath.org/authors/?q=ai:cohen.ran"Shelat, Abhi"https://www.zbmath.org/authors/?q=ai:shelat.abhi"Wichs, Daniel"https://www.zbmath.org/authors/?q=ai:wichs.danielSummary: A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results:
\begin{itemize}
\item [--] A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online-computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size.
\item [--] Under the same assumptions, we construct a ``Bob-optimized'' 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice's input size. We prove impossibility of ``Alice-optimized'' protocols.
\item [--] Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation.
\end{itemize}
On a technical level, our results are based on laconic function evaluation (LFE)
[\textit{W. Quach} et al., ``Laconic function evaluation and applications'', in: Proceedings of the 59th annual IEEE symposium on foundations of computer science, FOCS'18. Los Alamitos, CA: IEEE Computer Society. 859--870 (2018; \url{doi:10.1109/FOCS.2018.00086})]
and shed light on an interesting duality between LFE and FHE.
Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints.
For the entire collection see [Zbl 1428.94005].The corruption bound, log-rank, and communication complexityhttps://www.zbmath.org/1483.681352022-05-16T20:40:13.078697Z"Shraibman, Adi"https://www.zbmath.org/authors/?q=ai:shraibman.adiSummary: We prove upper bounds on deterministic communication complexity in terms of log of the rank and simple versions of the corruption bound. Our bounds are a simplified version of the results of
\textit{D. Gavinsky} and \textit{S. Lovett} [Lect. Notes Comput. Sci. 8572, 514--524 (2014; Zbl 1412.68064)],
using the same set of tools. We also give an elementary proof for the upper bound on communication complexity in terms of rank proved
by \textit{S. Lovett} [in: Proceedings of the 46th annual ACM symposium on theory of computing, STOC'14. New York, NY: Association for Computing Machinery (ACM). 842--846 (2014; Zbl 1315.68155); J. ACM 63, No. 1, Paper No. 1, 9 p. (2016; Zbl 1426.68132)].Anchored parallel repetition for nonlocal gameshttps://www.zbmath.org/1483.681362022-05-16T20:40:13.078697Z"Bavarian, Mohammad"https://www.zbmath.org/authors/?q=ai:bavarian.mohammad"Vidick, Thomas"https://www.zbmath.org/authors/?q=ai:vidick.thomas"Yuen, Henry"https://www.zbmath.org/authors/?q=ai:yuen.henry-cQuantum algorithm for Boolean equation solving and quantum algebraic attack on cryptosystemshttps://www.zbmath.org/1483.681372022-05-16T20:40:13.078697Z"Chen, Yu-Ao"https://www.zbmath.org/authors/?q=ai:chen.yu-ao"Gao, Xiao-Shan"https://www.zbmath.org/authors/?q=ai:gao.xiaoshanSummary: This paper presents a quantum algorithm to decide whether a Boolean equation system \(\mathcal{F}\) has a solution and to compute one if \(\mathcal{F}\) does have solutions, with any given success probability. The runtime complexity of the algorithm is polynomial in the size of \(\mathcal{F}\) and the condition number of a certain Macaulay matrix associated with \(\mathcal{F}\). As a consequence, the authors give a polynomial-time quantum algorithm for solving Boolean equation systems whose condition numbers are polynomial in the size of \(\mathcal{F}\). The authors apply the proposed quantum algorithm to the cryptanalysis of several important cryptosystems: the stream cipher Trivium, the block cipher AES, the hash function SHA-3/Keccak, and multivariate public key cryptosystems, and show that these are secure under quantum algebraic attack only if the corresponding condition numbers are large. 
This leads to a new criterion for designing such cryptosystems so that they are safe against attacks by quantum computers: the corresponding condition number.On the probability of finding marked connected components using quantum walkshttps://www.zbmath.org/1483.681382022-05-16T20:40:13.078697Z"Khadiev, K."https://www.zbmath.org/authors/?q=ai:khadiev.k-r"Nahimovs, N."https://www.zbmath.org/authors/?q=ai:nahimovs.nikolajs"Santos, R. A. M."https://www.zbmath.org/authors/?q=ai:santos.raqueline-azevedo-medeirosSummary: Finding a marked vertex in a graph can be a complicated task when using quantum walks. Recent results show that for two or more adjacent marked vertices, search by quantum walk with Grover's coin may have no speed-up over classical exhaustive search. In this paper, we analyze the probability of finding a marked vertex for a set of connected components of marked vertices. We prove two upper bounds on the probability of finding a marked vertex and sketch further research directions.Quantum-inspired algorithm with evolution strategyhttps://www.zbmath.org/1483.681392022-05-16T20:40:13.078697Z"Ouskova Leonteva, Anna"https://www.zbmath.org/authors/?q=ai:ouskova-leonteva.anna"Abdulkarimova, Ulviya"https://www.zbmath.org/authors/?q=ai:abdulkarimova.ulviya"Jeannin-Girardon, Anne"https://www.zbmath.org/authors/?q=ai:jeannin-girardon.anne"Risser, Michel"https://www.zbmath.org/authors/?q=ai:risser.michel"Parrend, Pierre"https://www.zbmath.org/authors/?q=ai:parrend.pierre"Collet, Pierre"https://www.zbmath.org/authors/?q=ai:collet.pierre|collet.pierre.1Summary: Quantum-inspired algorithms are efficient for solving global search optimization problems. Nevertheless, their application is limited by two main requirements: knowledge of the cost function and a large computational effort. To address both limitations, this paper presents a global optimization algorithm mixing a Quantum Diffusion Monte Carlo (DMC) method and an Evolution Strategy (ES). 
The proposed approach takes advantage of quantum models to efficiently find solutions close to the global optimum by applying the DMC. However, the DMC needs to be adapted to an optimization process. In order to improve the relatively slow convergence rate of the DMC, we integrate a step-size control into the diffusion displacement process of the DMC. Then, a \((1+1)\)-ES is applied to find the minimum value of the cost function. Experiments show that the proposed approach can find global minima with fewer objective function evaluations than Quantum Particle Swarm Optimization and Quantum Annealing. Thus, it seems promising for solving black-box problems, where a reliable trade-off between exploration and exploitation is required.
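The \((1+1)\)-ES component mentioned in the summary above can be sketched in a few lines. This is a minimal illustration of the standard \((1+1)\)-Evolution Strategy with the classic 1/5th success rule, not the authors' DMC-coupled implementation; the function and parameter names are hypothetical:

```python
import math
import random

def one_plus_one_es(cost, x0, sigma=1.0, iterations=500, seed=0):
    """Minimize `cost` with a (1+1)-Evolution Strategy (illustrative sketch).

    One parent produces one Gaussian-mutated offspring per generation;
    the step size sigma is adapted with the classic 1/5th success rule
    (grow on success, shrink on failure).
    """
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    for _ in range(iterations):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = cost(y)
        if fy <= fx:                    # offspring is at least as good: accept
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)
        else:                           # offspring is worse: shrink the steps
            sigma *= math.exp(-1.0 / 12.0)
    return x, fx

# Usage: minimize the sphere function from a distant starting point.
sphere = lambda v: sum(vi * vi for vi in v)
best, value = one_plus_one_es(sphere, [5.0, -3.0])
```

The growth/shrink factors are chosen so that the step size is stationary at a success rate of 1/5, the standard equilibrium for this rule.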
For the entire collection see [Zbl 1464.68016].Measuring the constrained reachability in quantum Markov chainshttps://www.zbmath.org/1483.681402022-05-16T20:40:13.078697Z"Xu, Ming"https://www.zbmath.org/authors/?q=ai:xu.ming"Huang, Cheng-Chao"https://www.zbmath.org/authors/?q=ai:huang.chengchao"Feng, Yuan"https://www.zbmath.org/authors/?q=ai:feng.yuanSummary: Constrained reachability is a kind of quantitative path property, which is generally specified by multiphase until formulas originated in continuous stochastic logic. In this paper, through proposing a positive operator valued measure on the set of infinite paths, we develop an exact method to solve the constrained reachability problem for quantum Markov chains. The convergence rate of the reachability is also obtained. We then analyse the complexity of the proposed method, which turns out to be in polynomial-time w.r.t. the size of the classical state space and the dimension of the accompanied Hilbert space. Finally, our method is implemented and applied to a simple quantum protocol.When symmetries are not enough: a hierarchy of hard constraint satisfaction problemshttps://www.zbmath.org/1483.681412022-05-16T20:40:13.078697Z"Gillibert, Pierre"https://www.zbmath.org/authors/?q=ai:gillibert.pierre"Jonušas, Julius"https://www.zbmath.org/authors/?q=ai:jonusas.julius"Kompatscher, Michael"https://www.zbmath.org/authors/?q=ai:kompatscher.michael"Mottet, Antoine"https://www.zbmath.org/authors/?q=ai:mottet.antoine"Pinsker, Michael"https://www.zbmath.org/authors/?q=ai:pinsker.michaelScheduling lower bounds via AND subset sumhttps://www.zbmath.org/1483.681422022-05-16T20:40:13.078697Z"Abboud, Amir"https://www.zbmath.org/authors/?q=ai:abboud.amir"Bringmann, Karl"https://www.zbmath.org/authors/?q=ai:bringmann.karl"Hermelin, Danny"https://www.zbmath.org/authors/?q=ai:hermelin.danny"Shabtay, Dvir"https://www.zbmath.org/authors/?q=ai:shabtay.dvirSummary: Given \(N\) instances \(( X_1, t_1), \ldots,( X_N, t_N)\) of Subset 
Sum, the AND Subset Sum problem asks to determine whether all of these instances are yes-instances; that is, whether each set of integers \(X_i\) has a subset that sums up to the target integer \(t_i\). We prove that this problem cannot be solved in time \(\widetilde{O}((N \cdot t_{\max})^{1 - \varepsilon})\), for \(t_{\max} = \max_i t_i\) and any \(\varepsilon > 0\), assuming the \(\forall \exists\) Strong Exponential Time Hypothesis \(( \forall \exists \)-SETH). We then use this result to exclude \(\widetilde{O}(n + p_{\max} \cdot n^{1 - \varepsilon})\)-time algorithms for several scheduling problems on \(n\) jobs with maximum processing time \(p_{\max}\), assuming \(\forall \exists \)-SETH. These include classical problems such as \(1 | | \sum w_j U_j\), the problem of minimizing the total weight of tardy jobs on a single machine, and \(P_2 | | \sum U_j\), the problem of minimizing the number of tardy jobs on two identical parallel machines.Minimum weight polygon triangulation problem in sub-cubic time boundhttps://www.zbmath.org/1483.681432022-05-16T20:40:13.078697Z"Bae, Sung Eun"https://www.zbmath.org/authors/?q=ai:bae.sung-eun"Shinn, Tong-Wook"https://www.zbmath.org/authors/?q=ai:shinn.tong-wook"Takaoka, Tadao"https://www.zbmath.org/authors/?q=ai:takaoka.tadaoSummary: We break the long-standing cubic time bound of \(O(n^3)\) for the Minimum Weight Polygon Triangulation problem by showing that the well-known dynamic programming algorithm, reported independently by Gilbert and Klincsek, can be optimized with a faster algorithm for the \((min,+)\)-product using look-up tables. In doing so, we also show that the well-known Floyd-Warshall algorithm can be optimized in a similar manner to achieve a sub-cubic time bound for the All Pairs Shortest Paths problem without having to resort to recursion in the semi-ring theory.
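The connection exploited in the summary above, that a faster \((min,+)\)-product speeds up both the triangulation recurrence and All Pairs Shortest Paths, can be made concrete with a small sketch. The code below shows the plain cubic \((min,+)\)-product and APSP by repeated \((min,+)\)-squaring; it is a baseline illustration only, not the table-lookup speedup of the paper, and the names are our own:

```python
INF = float("inf")

def min_plus_product(a, b):
    """Cubic (min,+)-product: c[i][j] = min_k (a[i][k] + b[k][j]).

    This is the baseline that look-up-table techniques accelerate;
    the sub-cubic version is not reproduced here.
    """
    n = len(a)
    return [[min(a[i][k] + b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_shortest_paths(w):
    """APSP via repeated (min,+)-squaring of the weight matrix.

    After O(log n) squarings the matrix stabilizes to the distance
    matrix, the same result Floyd-Warshall computes in Theta(n^3).
    """
    n = len(w)
    d = [row[:] for row in w]
    steps = 1
    while steps < n - 1:          # paths of up to `steps` edges are covered
        d = min_plus_product(d, d)
        steps *= 2
    return d

# Usage: a 4-vertex digraph; the cheapest 0 -> 3 route uses three hops.
W = [
    [0, 3, INF, 7],
    [8, 0, 2, INF],
    [5, INF, 0, 1],
    [2, INF, INF, 0],
]
dist = all_pairs_shortest_paths(W)
```

With diagonal zeros, squaring never loses shorter entries, so each squaring doubles the number of edges a path may use.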
For the entire collection see [Zbl 1377.68004].The complexity of online bribery in sequential electionshttps://www.zbmath.org/1483.681442022-05-16T20:40:13.078697Z"Hemaspaandra, Edith"https://www.zbmath.org/authors/?q=ai:hemaspaandra.edith"Hemaspaandra, Lane A."https://www.zbmath.org/authors/?q=ai:hemaspaandra.lane-a"Rothe, Jörg"https://www.zbmath.org/authors/?q=ai:rothe.jorg-matthiasSummary: Prior work on the complexity of bribery assumes that the bribery happens simultaneously, and that the briber has full knowledge of all votes. However, in many real-world settings votes come in sequentially, and the briber may have a use-it-or-lose-it moment to decide whether to alter a given vote, and when making that decision the briber may not know what votes remaining voters will cast. We introduce a model for, and initiate the study of, bribery in such an online, sequential setting. We show that even for election systems whose winner-determination problem is polynomial-time computable, an online, sequential setting may vastly increase the complexity of bribery, jumping the problem up to completeness for high levels of the polynomial hierarchy or even PSPACE. But we also show that for some natural, important election systems, such a dramatic complexity increase does not occur, and we pinpoint the complexity of their bribery problems.The complexity of finding read-once NAE-resolution refutationshttps://www.zbmath.org/1483.681452022-05-16T20:40:13.078697Z"Kleine Büning, Hans"https://www.zbmath.org/authors/?q=ai:kleine-buning.hans"Wojciechowski, Piotr"https://www.zbmath.org/authors/?q=ai:wojciechowski.piotr-j"Subramani, K."https://www.zbmath.org/authors/?q=ai:subramani.krishnanSummary: In this paper, we analyze Boolean formulas in conjunctive normal form (CNF) from the perspective of read-once resolution (ROR) refutation. A read-once (resolution) refutation is one in which each input clause is used at most once. 
It is well-known that read-once resolution is not complete, i.e., there exist unsatisfiable formulas for which no read-once resolution exists. Likewise, the problem of checking if a 3CNF formula has a read-once refutation is NP-complete. This paper is concerned with a variant of satisfiability called Not-All-Equal Satisfiability (NAE-Satisfiability). NAE-Satisfiability is the problem of checking whether an arbitrary CNF formula has a satisfying assignment in which at least one literal in each clause is set to false. It is well-known that NAE-satisfiability is NP-complete. Clearly, the class of CNF formulas which are NAE-satisfiable is a proper subset of the class of satisfiable CNF formulas. It follows that traditional resolution cannot always find a proof of NAE-unsatisfiability. Thus, traditional resolution is not a sound procedure for checking NAE-satisfiability. In this paper, we introduce a variant of resolution called NAE-resolution, which is a sound and complete procedure for checking NAE-satisfiability in CNF formulas. We focus on a variant of NAE-resolution called read-once NAE-resolution, in which each input clause can be part of at most one NAE-resolution step. Our principal result is that read-once NAE-resolution is a sound and complete procedure for checking the NAE-satisfiability of 2CNF formulas; we also provide a polynomial time algorithm to determine the shortest read-once NAE-resolution of a 2CNF formula. Finally, we establish that the problem of checking whether a 3CNF formula has a read-once NAE-resolution is NP-complete.
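The NAE-satisfiability condition discussed above can be made concrete with a tiny brute-force checker (an illustrative sketch under the standard definition; the DIMACS-style literal encoding and function names are our own):

```python
from itertools import product

def is_nae_satisfiable(clauses, num_vars):
    """Brute-force NAE-satisfiability check for a CNF formula.

    Clauses are lists of non-zero ints (DIMACS style: v for a positive
    literal of variable v, -v for a negative one).  An assignment
    NAE-satisfies the formula if every clause contains at least one
    true and at least one false literal.
    """
    def lit_value(lit, assignment):
        val = assignment[abs(lit) - 1]
        return val if lit > 0 else not val

    for assignment in product([False, True], repeat=num_vars):
        if all(any(lit_value(l, assignment) for l in clause) and
               not all(lit_value(l, assignment) for l in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) is satisfiable (set x2 = True) but not
# NAE-satisfiable: no assignment leaves both clauses with mixed values.
nae1 = is_nae_satisfiable([[1, 2], [-1, 2]], 2)
# (x1 or x2) and (not x1 or not x2) is NAE-satisfied by x1=True, x2=False.
nae2 = is_nae_satisfiable([[1, 2], [-1, -2]], 2)
```

The first example witnesses the strict inclusion stated in the summary: the NAE-satisfiable CNF formulas form a proper subset of the satisfiable ones.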
For the entire collection see [Zbl 1416.03005].On the parameterized parallel complexity and the vertex cover problemhttps://www.zbmath.org/1483.681462022-05-16T20:40:13.078697Z"Abu-Khzam, Faisal N."https://www.zbmath.org/authors/?q=ai:abu-khzam.faisal-n"Li, Shouwei"https://www.zbmath.org/authors/?q=ai:li.shouwei"Markarian, Christine"https://www.zbmath.org/authors/?q=ai:markarian.christine"Meyer auf der Heide, Friedhelm"https://www.zbmath.org/authors/?q=ai:meyer-auf-der-heide.friedhelm"Podlipyan, Pavel"https://www.zbmath.org/authors/?q=ai:podlipyan.pavelSummary: Efficiently parallelizable parameterized problems have been classified as being either in the class FPP (fixed-parameter parallelizable) or the class PNC (parameterized analog of NC), which contains FPP as a subclass. In this paper, we propose a more restrictive class of parallelizable parameterized problems called fixed-parameter parallel-tractable (FPPT). For a problem to be in FPPT, it should possess an efficient parallel algorithm not only from a theoretical standpoint but in practice as well. The primary distinction between FPPT and FPP is the parallel processor utilization, which is bounded by a polynomial function in the case of FPPT. We initiate the study of FPPT with the well-known \(k\)-vertex cover problem. In particular, we present a parallel algorithm that outperforms the best known parallel algorithm for this problem: using \(\mathcal {O}(m)\) instead of \(\mathcal {O}(n^2)\) parallel processors, the running time improves from \(4\log n + \mathcal {O}(k^k)\) to \(\mathcal {O}(k\cdot \log ^3 n)\), where \(m\) is the number of edges, \(n\) is the number of vertices of the input graph, and \(k\) is an upper bound of the size of the sought vertex cover. We also note that a few P-complete problems fall into FPPT including the Monotone Circuit Value problem (MCV) when the underlying graphs are bounded by a constant Euler genus.
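The \(k\)-vertex cover problem studied above is classically solved sequentially by a bounded search tree in \(O(2^k \cdot m)\) time. The sketch below shows that textbook routine for contrast with the parallel FPPT algorithm of the paper; it is not the authors' method, and the names are our own:

```python
def vertex_cover_branch(edges, k):
    """Classic bounded-search-tree decision procedure for k-Vertex Cover.

    Pick any uncovered edge (u, v); at least one endpoint must be in
    the cover, so branch on u and on v.  The recursion depth is at
    most k, giving O(2^k * m) time.  Returns a cover of size <= k as
    a set, or None if no such cover exists.
    """
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for chosen in (u, v):
        rest = [(a, b) for (a, b) in edges if chosen not in (a, b)]
        sub = vertex_cover_branch(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

# Usage: a 4-cycle has a vertex cover of size 2 but none of size 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover2 = vertex_cover_branch(cycle, 2)
cover1 = vertex_cover_branch(cycle, 1)
```

The exponential part depends only on the parameter \(k\), which is what makes the problem fixed-parameter tractable in the first place.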
For the entire collection see [Zbl 1377.68004].Backdoors into two occurrenceshttps://www.zbmath.org/1483.681472022-05-16T20:40:13.078697Z"Johannsen, Jan"https://www.zbmath.org/authors/?q=ai:johannsen.janSummary: Backdoor sets for the class CNF(2) of CNF-formulas in which every variable has at most two occurrences are studied in terms of parameterized complexity. The question whether there exists a CNF(2)-backdoor set of size \(k\) is hard for the class \(\mathsf{W} [2]\), for both weak and strong backdoors, and in both cases it becomes fixed-parameter tractable when restricted to inputs in \(d\)-CNF for a fixed \(d\).
Besides that, it is shown that the problem of finding weak backdoor sets is \(\mathsf{W} [2]\)-complete for certain tractable cases. These are the first completeness results in lower levels of the \(\mathsf{W} \)-hierarchy for any backdoor set problem.DKL: an efficient algorithm for learning deterministic Kripke structureshttps://www.zbmath.org/1483.681482022-05-16T20:40:13.078697Z"Mazhar, Rabia"https://www.zbmath.org/authors/?q=ai:mazhar.rabia"Sindhu, Muddassar Azam"https://www.zbmath.org/authors/?q=ai:sindhu.muddassar-azamSummary: There has been a recent growth of interest in the software engineering community in using grammatical inference, also known as automaton learning, in software engineering applications. This is primarily because the capacity of the underlying hardware resources has improved significantly in recent years, which has enabled the use of this approach beyond toy examples with greater frequency. In this paper, we present a new grammatical inference algorithm, DKL, to learn deterministic Kripke structures. DKL is only the second algorithm of its kind for incremental learning of deterministic Kripke structures. Earlier, the IKL algorithm was introduced for learning deterministic Kripke structures, but it often constructs a hypothesis with a much larger state space than the target Kripke structure. This problem, known as state-space explosion, occurs primarily because of the sub-direct product construction used in the IKL design, which in turn affects the time efficiency of IKL, especially in practical software engineering applications. The DKL algorithm is designed to resolve the problem of state-space explosion. We give a proof of correctness and termination of the DKL algorithm. We also compared the performance of DKL with that of IKL in an evaluation framework. 
Our results show that DKL reaches the target automaton faster than the IKL algorithm and is not prone to the problem of state-space explosion.Formalized proofs of the infinity and normal form predicates in the first-order theory of rewritinghttps://www.zbmath.org/1483.681492022-05-16T20:40:13.078697Z"Lochmann, Alexander"https://www.zbmath.org/authors/?q=ai:lochmann.alexander"Middeldorp, Aart"https://www.zbmath.org/authors/?q=ai:middeldorp.aartSummary: We present a formalized proof of the regularity of the infinity predicate on ground terms. This predicate plays an important role in the first-order theory of rewriting because it allows one to express the termination property. The paper also contains a formalized proof of a direct tree automaton construction of the normal form predicate, due to Comon.
For the entire collection see [Zbl 1471.68010].Derivatives of parsing expression grammarshttps://www.zbmath.org/1483.681502022-05-16T20:40:13.078697Z"Moss, Aaron"https://www.zbmath.org/authors/?q=ai:moss.aaronSummary: This paper introduces a new derivative parsing algorithm for recognition of parsing expression grammars. Derivative parsing is shown to have a polynomial worst-case time bound, an improvement on the exponential bound of the recursive descent algorithm. This work also introduces asymptotic analysis based on inputs with a constant bound on both grammar nesting depth and number of backtracking choices; derivative and recursive descent parsing are shown to run in linear time and constant space on this useful class of inputs, with both the theoretical bounds and the reasonability of the input class validated empirically. This common-case constant memory usage of derivative parsing is an improvement on the linear space required by the packrat algorithm.
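The derivative idea behind the parser described above is easiest to see in its regular-language ancestor, Brzozowski derivatives. The sketch below implements derivatives for plain regular expressions, not for parsing expression grammars as in the paper; the tuple encoding and names are our own:

```python
# Regular expressions as tuples:
# ('empty',), ('eps',), ('chr', c), ('alt', r, s), ('seq', r, s), ('star', r).

def nullable(r):
    """Does the language of r contain the empty word?"""
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'seq':
        return nullable(r[1]) and nullable(r[2])
    return False  # 'empty' and 'chr'

def deriv(r, c):
    """Brzozowski derivative: the words w such that c.w is in L(r)."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'seq':
        head = ('seq', deriv(r[1], c), r[2])
        if nullable(r[1]):                 # r[1] may match the empty word,
            return ('alt', head, deriv(r[2], c))  # so c may start r[2]
        return head
    return ('seq', deriv(r[1], c), r)      # 'star': unroll one iteration

def matches(r, word):
    """Recognise `word` by taking one derivative per input symbol."""
    for c in word:
        r = deriv(r, c)
    return nullable(r)

# Usage: (ab)* recognises "", "ab", "abab" but not "aba".
ab_star = ('star', ('seq', ('chr', 'a'), ('chr', 'b')))
```

Derivative parsing of PEGs follows the same recipe of consuming one symbol per step, with extra machinery for ordered choice and backtracking state.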
For the entire collection see [Zbl 1433.68013].Minimalist grammar transition-based parsinghttps://www.zbmath.org/1483.681512022-05-16T20:40:13.078697Z"Stanojević, Miloš"https://www.zbmath.org/authors/?q=ai:stanojevic.milosSummary: Current chart-based parsers of Minimalist Grammars exhibit prohibitively high polynomial complexity that makes them unusable in practice. This paper presents a transition-based parser for minimalist grammars that approximately searches through the space of possible derivations by means of beam search, and does so very efficiently: the worst case complexity of building one derivation is \(O(n^2)\) and the best case complexity is \(O(n)\). This approximated inference can be guided by a trained probabilistic model that can condition on larger context than standard chart-based parsers. The transitions of the parser are very similar to the transitions of bottom-up shift-reduce parsers for context-free grammars, with additional transitions for online reordering of words during parsing in order to make non-projective derivations projective.
For the entire collection see [Zbl 1351.68009].Efficient completion of weighted automatahttps://www.zbmath.org/1483.681522022-05-16T20:40:13.078697Z"Waldmann, Johannes"https://www.zbmath.org/authors/?q=ai:waldmann.johannesSummary: We consider directed graphs with edge labels from a semiring. We present an algorithm that allows efficient execution of queries for existence and weights of paths, and allows updates of the graph: adding nodes and edges, and changing weights of existing edges.
We apply this method in the construction of matchbound certificates for automatically proving termination of string rewriting. We re-implement the decomposition/completion algorithm of
\textit{J. Endrullis} et al. [``Decomposing terminating rewrite relations'', in: Proceedings of the 8th international workshop on termination, WST'06, Seattle, Washington, 15--16 August 2006. 39--43 (2006)]
in our framework, and achieve comparable performance.
For the entire collection see [Zbl 1446.68009].Polynomial identification of \(\omega \)-automatahttps://www.zbmath.org/1483.681532022-05-16T20:40:13.078697Z"Angluin, Dana"https://www.zbmath.org/authors/?q=ai:angluin.dana"Fisman, Dana"https://www.zbmath.org/authors/?q=ai:fisman.dana"Shoval, Yaara"https://www.zbmath.org/authors/?q=ai:shoval.yaaraSummary: We study identification in the limit using polynomial time and data for models of \(\omega \)-automata. On the negative side we show that non-deterministic \(\omega \)-automata (of type Büchi, coBüchi, parity, or Muller) cannot be polynomially learned in the limit. On the positive side we show that the \(\omega \)-language classes \(\mathbb{IB}\), \(\mathbb{IC}\), \(\mathbb{IP} \), and \(\mathbb{IM}\) that are defined by deterministic Büchi, coBüchi, parity, and Muller acceptors that are isomorphic to their right-congruence automata (that is, the right congruences of languages in these classes are fully informative) are identifiable in the limit using polynomial time and data. We further show that for these classes a characteristic sample can be constructed in polynomial time.
For the entire collection see [Zbl 1471.68010].Bisimulation metrics and norms for real-weighted automatahttps://www.zbmath.org/1483.681542022-05-16T20:40:13.078697Z"Balle, Borja"https://www.zbmath.org/authors/?q=ai:balle.borja"Gourdeau, Pascale"https://www.zbmath.org/authors/?q=ai:gourdeau.pascale"Panangaden, Prakash"https://www.zbmath.org/authors/?q=ai:panangaden.prakashSummary: We develop a new bisimulation (pseudo)metric for weighted finite automata (WFA) that generalizes Boreale's linear bisimulation relation. Our metrics are induced by seminorms on the state space of WFA. Our development is based on spectral properties of sets of linear operators. In particular, the joint spectral radius of the transition matrices of WFA plays a central role. We also study continuity properties of the bisimulation pseudometric, establish an undecidability result for computing the metric, and give a preliminary account of applications to spectral learning of weighted automata.On the descriptional complexity of operations on semilinear setshttps://www.zbmath.org/1483.681552022-05-16T20:40:13.078697Z"Beier, Simon"https://www.zbmath.org/authors/?q=ai:beier.simon"Holzer, Markus"https://www.zbmath.org/authors/?q=ai:holzer.markus"Kutrib, Martin"https://www.zbmath.org/authors/?q=ai:kutrib.martinSummary: We investigate the descriptional complexity of operations on semilinear sets. Roughly speaking, a semilinear set is the finite union of linear sets, which are built by constant and period vectors. The interesting parameters of a semilinear set are: (i) the maximal value that appears in the vectors of periods and constants and (ii) the number of such sets of periods and constants necessary to describe the semilinear set under consideration. More precisely, we prove upper bounds on the union, intersection, complementation, and inverse homomorphism. In particular, our result on the complementation upper bound answers an open problem from
[\textit{G. J. Lavado} et al., Lect. Notes Comput. Sci. 8614, 294--305 (2014; Zbl 1410.68208)].
For the entire collection see [Zbl 1433.68013].Polynomially ambiguous probabilistic automata on restricted languageshttps://www.zbmath.org/1483.681562022-05-16T20:40:13.078697Z"Bell, Paul C."https://www.zbmath.org/authors/?q=ai:bell.paul-cSummary: We consider the computability and complexity of decision questions for Probabilistic Finite Automata (PFA) with sub-exponential ambiguity. We show that the emptiness problem for strict and non-strict cut-points of polynomially ambiguous commutative PFA remains undecidable, implying that the problem is undecidable when inputs are from a letter monotonic language. We show that the problem remains undecidable over a binary input alphabet when the input word is over a bounded language, in the noncommutative case. In doing so, we introduce a new technique based upon the Turakainen construction of a PFA from a Weighted Finite Automaton which can be used to generate PFA of lower dimensions and of sub-exponential ambiguity. We also study freeness/injectivity problems for polynomially ambiguous PFA and study the border of decidability and tractability for various cases.Revisiting reachability in polynomial interrupt timed automatahttps://www.zbmath.org/1483.681572022-05-16T20:40:13.078697Z"Bérard, Béatrice"https://www.zbmath.org/authors/?q=ai:berard.beatrice"Haddad, Serge"https://www.zbmath.org/authors/?q=ai:haddad.sergeSummary: Polynomial Interrupt Timed Automata (\textsc{PolITA}) are finite automata with clocks organized along hierarchical levels. These clocks are equipped with an interruption mechanism, well suited to the modeling of real-time operating systems. Moreover, transitions between states contain polynomial guards and updates. The reachability problem in this class is known to be in 2\textsf{EXPTIME} with a decision procedure based on the cylindrical algebraic decomposition. 
We improve this complexity to \textsf{EXPSPACE}, mainly using a combinatorial argument, and we give a reduction establishing a \textsf{PSPACE} lower bound.Corrigendum to: ``Revisiting reachability in polynomial interrupt timed automata''https://www.zbmath.org/1483.681582022-05-16T20:40:13.078697Z"Bérard, Béatrice"https://www.zbmath.org/authors/?q=ai:berard.beatrice"Haddad, Serge"https://www.zbmath.org/authors/?q=ai:haddad.sergeCorrigendum to the authors' paper [ibid. 174, Article ID 106208, 6 p. (2022; Zbl 1483.68157)].Higher-order operator precedence languageshttps://www.zbmath.org/1483.681592022-05-16T20:40:13.078697Z"Crespi Reghizzi, Stefano"https://www.zbmath.org/authors/?q=ai:crespi-reghizzi.stefano"Pradella, Matteo"https://www.zbmath.org/authors/?q=ai:pradella.matteoSummary: Floyd's Operator Precedence (OP) languages are a deterministic context-free family having many desirable properties. They are locally and parallelly parsable, and languages having a compatible structure are closed under Boolean operations, concatenation and star; they properly include the family of Visibly Pushdown (or Input Driven) languages. OP languages are based on three relations between any two consecutive terminal symbols, which assign syntax structure to words. We extend such relations to \(k\)-tuples of consecutive terminal symbols, by using the model of strictly locally testable regular languages of order \(k\geq 3\). The new corresponding class of \textit{Higher-order Operator Precedence languages} (HOP) properly includes the OP languages, and it is still included in the deterministic (also in reverse) context free family. We prove Boolean closure for each subfamily of structurally compatible HOP languages. In each subfamily, the top language is called \textit{max-language}. We show that such languages are defined by a simple cancellation rule and we prove several properties, in particular that max-languages form an infinite hierarchy ordered by parameter \(k\).
HOP languages are a candidate for replacing OP languages in the various applications where they have been successful, though sometimes too restrictive.
For the entire collection see [Zbl 1433.68013].Beyond \(\omega BS\)-regular languages: \(\omega T\)-regular expressions and counter-check automatahttps://www.zbmath.org/1483.681602022-05-16T20:40:13.078697Z"Della Monica, Dario"https://www.zbmath.org/authors/?q=ai:della-monica.dario"Montanari, Angelo"https://www.zbmath.org/authors/?q=ai:montanari.angelo"Sala, Pietro"https://www.zbmath.org/authors/?q=ai:sala.pietroSummary: In the last years, various extensions of \(\omega\)-regular languages have been proposed in the literature, including \(\omega B\)-regular (\(\omega \)-regular languages extended with boundedness), \(\omega S\)-regular (\(\omega\)-regular languages extended with strict unboundedness), and \(\omega BS\)-regular languages (the combination of \(\omega B\)- and \(\omega S\)-regular ones). While the first two classes satisfy a generalized closure property, namely, the complement of an \(\omega B\)-regular (resp., \(\omega S\)-regular) language is an \(\omega S\)-regular (resp., \(\omega B\)-regular) one, the last class is not closed under complementation. The existence of non-\(\omega BS\)-regular languages that are the complements of some \(\omega BS\)-regular ones and express fairly natural properties of reactive systems motivates the search for other well-behaved classes of extended \(\omega\)-regular languages. In this paper, we introduce the class of \(\omega T\)-regular languages, that includes meaningful languages which are not \(\omega BS\)-regular. We first define it in terms of \(\omega T\)-regular expressions. Then, we introduce a new class of automata (counter-check automata) and we prove that (i) their emptiness problem is decidable in PTIME and (ii) they are expressive enough to capture \(\omega T\)-regular languages (whether or not \(\omega T\)-regular languages are expressively complete with respect to counter-check automata is still an open problem). Finally, we provide an encoding of \(\omega T\)-regular expressions into \textsf{S1S+U}.
For the entire collection see [Zbl 1436.68017].Approximation of weighted automata with storagehttps://www.zbmath.org/1483.681612022-05-16T20:40:13.078697Z"Denkinger, Tobias"https://www.zbmath.org/authors/?q=ai:denkinger.tobiasSummary: We use a non-deterministic variant of storage types to develop a framework for the approximation of automata with storage. This framework is used to provide automata-theoretic views on the approximation of multiple context-free languages and on coarse-to-fine parsing.
For the entire collection see [Zbl 1436.68017].Weighted operator precedence languageshttps://www.zbmath.org/1483.681622022-05-16T20:40:13.078697Z"Droste, Manfred"https://www.zbmath.org/authors/?q=ai:droste.manfred"Dück, Stefan"https://www.zbmath.org/authors/?q=ai:duck.stefan"Mandrioli, Dino"https://www.zbmath.org/authors/?q=ai:mandrioli.dino"Pradella, Matteo"https://www.zbmath.org/authors/?q=ai:pradella.matteoSummary: In recent years, renewed investigation of operator precedence languages (OPL) has led to the discovery of important properties thereof: OPL are closed with respect to all major operations, are characterized, besides by the original grammar family, in terms of an automata family (OPA) and an MSO logic; furthermore they significantly generalize the well-known visibly pushdown languages (VPL). A different area of research investigates quantitative evaluations of formal languages by adding weights to strings. In this paper, we lay the foundation to marry these two research fields. We introduce weighted operator precedence automata and show how they are both strict extensions of OPA and weighted visibly pushdown automata. We prove a Nivat-like result which shows that quantitative OPL can be described by unweighted OPA and very particular weighted OPA. In a Büchi-like theorem, we show that weighted OPA are expressively equivalent to a weighted MSO-logic for OPL.The triple-pair construction for weighted \(\omega\)-pushdown automatahttps://www.zbmath.org/1483.681632022-05-16T20:40:13.078697Z"Droste, Manfred"https://www.zbmath.org/authors/?q=ai:droste.manfred"Ésik, Zoltán"https://www.zbmath.org/authors/?q=ai:esik.zoltan"Kuich, Werner"https://www.zbmath.org/authors/?q=ai:kuich.wernerSummary: Let \(S\) be a complete star-omega semiring and \(\Sigma\) be an alphabet.
For a weighted \(\omega\)-pushdown automaton \(\mathscr{P}\) with state set \(\{1, \dots, n\}\), \(n\geq 1\), we show that there exists a mixed algebraic system over a complete semiring-semimodule pair (\((S\ll\Sigma^*\gg )^{n\times n}\), \((S\ll\Sigma^\omega\gg)^n\)) such that the behavior \(\|\mathscr{P}\|\) of \(\mathscr{P}\) is a component of a solution of this system. In case the basic semiring is \(\mathbb{B}\) or \(\mathbb{N}^\infty\), we show that there exists a mixed context-free grammar that generates \(\|\mathscr{P}\|\). The construction of the mixed context-free grammar from \(\mathscr{P}\) is a generalization of the well-known triple construction and is now called the triple-pair construction for \(\omega\)-pushdown automata.
For the entire collection see [Zbl 1433.68013].MK-fuzzy automata and MSO logicshttps://www.zbmath.org/1483.681642022-05-16T20:40:13.078697Z"Droste, Manfred"https://www.zbmath.org/authors/?q=ai:droste.manfred"Kutsia, Temur"https://www.zbmath.org/authors/?q=ai:kutsia.temur"Rahonis, George"https://www.zbmath.org/authors/?q=ai:rahonis.george"Schreiner, Wolfgang"https://www.zbmath.org/authors/?q=ai:schreiner.wolfgangSummary: We introduce MK-fuzzy automata over a bimonoid \(K\) which is related to the fuzzification of the McCarthy-Kleene logic. Our automata are inspired by, and intend to contribute to, practical applications being in development in a project on runtime network monitoring based on predicate logic. We investigate closure properties of the class of recognizable MK-fuzzy languages accepted by MK-fuzzy automata as well as of deterministically recognizable MK-fuzzy languages accepted by their deterministic counterparts. Moreover, we establish a Nivat-like result for recognizable MK-fuzzy languages. We introduce an MK-fuzzy MSO logic and show the expressive equivalence of a fragment of this logic with MK-fuzzy automata, i.e., a Büchi type theorem.
For the entire collection see [Zbl 1436.68017].Probabilistic automata of bounded ambiguityhttps://www.zbmath.org/1483.681652022-05-16T20:40:13.078697Z"Fijalkow, Nathanaël"https://www.zbmath.org/authors/?q=ai:fijalkow.nathanael"Riveros, Cristian"https://www.zbmath.org/authors/?q=ai:riveros.cristian"Worrell, James"https://www.zbmath.org/authors/?q=ai:worrell.james-bSummary: Probabilistic automata are an extension of nondeterministic finite automata in which transitions are annotated with probabilities. Despite its simplicity, this model is very expressive and many algorithmic questions are undecidable. In this work we focus on the emptiness problem (and its variant the value problem), which asks whether a given probabilistic automaton accepts some word with probability greater than a given threshold. We consider finitely ambiguous probabilistic automata. Our main contributions are to construct efficient algorithms for analysing finitely ambiguous probabilistic automata through a reduction to a multi-objective optimisation problem called the stochastic path problem. We obtain a polynomial time algorithm for approximating the value of probabilistic automata of fixed ambiguity and a quasi-polynomial time algorithm for the emptiness problem for 2-ambiguous probabilistic automata. 
We complement these positive results by an inapproximability result stating that the value of finitely ambiguous probabilistic automata cannot be approximated unless \(\mathbf{P} = \mathbf{NP} \).The complexity of weakly recognizing morphismshttps://www.zbmath.org/1483.681662022-05-16T20:40:13.078697Z"Fleischer, Lukas"https://www.zbmath.org/authors/?q=ai:fleischer.lukas"Kufleitner, Manfred"https://www.zbmath.org/authors/?q=ai:kufleitner.manfredSummary: Weakly recognizing morphisms from free semigroups onto finite semigroups are a classical way for defining the class of \(\omega\)-regular languages, i.e., a set of infinite words is weakly recognizable by such a morphism if and only if it is accepted by some Büchi automaton. We study the descriptional complexity of various constructions and the computational complexity of various decision problems for weakly recognizing morphisms. The constructions we consider are the conversion from and to Büchi automata, the conversion into strongly recognizing morphisms, as well as complementation. We also show that the fixed membership problem is \(\mathsf{NC}^1\)-complete, the general membership problem is in \textsf{L} and that the inclusion, equivalence and universality problems are \textsf{NL}-complete. The emptiness problem is shown to be \textsf{NL}-complete if the input is given as a non-surjective morphism.Costs and rewards in priced timed automatahttps://www.zbmath.org/1483.681672022-05-16T20:40:13.078697Z"Fränzle, Martin"https://www.zbmath.org/authors/?q=ai:franzle.martin"Shirmohammadi, Mahsa"https://www.zbmath.org/authors/?q=ai:shirmohammadi.mahsa"Swaminathan, Mani"https://www.zbmath.org/authors/?q=ai:swaminathan.mani"Worrell, James"https://www.zbmath.org/authors/?q=ai:worrell.james-bSummary: We consider Pareto analysis of reachable states of multi-priced timed automata (MPTA): timed automata equipped with multiple observers that keep track of costs (to be minimised) and rewards (to be maximised) along a computation. 
Each observer has a constant non-negative derivative which may depend on the location of the MPTA.\par We study the Pareto Domination Problem, which asks whether it is possible to reach a target location via a run in which the accumulated costs and rewards Pareto dominate a given objective vector. We show that this problem is undecidable in general, but decidable for MPTA with at most three observers. For MPTA whose observers are all costs or all rewards, we show that the Pareto Domination Problem is PSPACE-complete. We also consider an \(\varepsilon\)-approximate Pareto Domination Problem that is decidable without restricting the number and types of observers.\par We develop connections between MPTA and Diophantine equations. Undecidability of the Pareto Domination Problem is shown by reduction from Hilbert's 10th Problem, while decidability for three observers is shown by a translation to a fragment of arithmetic involving quadratic forms.
For the entire collection see [Zbl 1392.68012].Costs and rewards in priced timed automatahttps://www.zbmath.org/1483.681682022-05-16T20:40:13.078697Z"Fränzle, Martin"https://www.zbmath.org/authors/?q=ai:franzle.martin"Shirmohammadi, Mahsa"https://www.zbmath.org/authors/?q=ai:shirmohammadi.mahsa"Swaminathan, Mani"https://www.zbmath.org/authors/?q=ai:swaminathan.mani"Worrell, James"https://www.zbmath.org/authors/?q=ai:worrell.james-bSummary: We consider Pareto analysis of multi-priced timed automata (MPTA) having multiple observers recording costs (to be minimised) and rewards (to be maximised) along a computation. We study the Pareto Domination Problem, which asks whether it is possible to reach a target location such that the accumulated costs and rewards Pareto dominate a given vector. We show that this problem is undecidable in general, but decidable for MPTA with at most three observers. We show the problem to be PSPACE-complete for MPTA recording only costs or only rewards. We also consider an approximate Pareto Domination Problem that is decidable in exponential time with no restrictions on the types and number of observers. We develop connections between MPTA and Diophantine equations. Undecidability of the Pareto Domination Problem is shown by reduction from Hilbert's 10th Problem, while decidability for three observers is shown by a translation to a decidable fragment of arithmetic involving quadratic forms.\(O_n\) is an \(n\)-MCFLhttps://www.zbmath.org/1483.681692022-05-16T20:40:13.078697Z"Gebhardt, Kilian"https://www.zbmath.org/authors/?q=ai:gebhardt.kilian"Meunier, Frédéric"https://www.zbmath.org/authors/?q=ai:meunier.frederic"Salvati, Sylvain"https://www.zbmath.org/authors/?q=ai:salvati.sylvainSummary: Commutative properties in formal languages pose problems at the frontier of computer science, computational linguistics and computational group theory.
A prominent problem of this kind is the position of the language \(O_n\), the language of words that contain the same number of letters \(a_i\) and \(\overline{a}_i\) for each \(1 \leq i \leq n\), in the known classes of formal languages. It has recently been shown that \(O_n\) is a Multiple Context-Free Language (MCFL). However, the more precise conjecture of Nederhof that \(O_n\) is an MCFL of dimension \(n\) was left open. We prove this conjecture using tools from algebraic topology. On our way, we prove a variant of the necklace splitting theorem.Reversible languages having finitely many reduced automatahttps://www.zbmath.org/1483.681702022-05-16T20:40:13.078697Z"Gelle, Kitti"https://www.zbmath.org/authors/?q=ai:gelle.kitti"Iván, Szabolcs"https://www.zbmath.org/authors/?q=ai:ivan.szabolcsSummary: Reversible forms of computation are often interesting from an energy efficiency point of view. When the computation device in question is an automaton, it is known that the minimal reversible automaton recognizing a given language is not necessarily unique; moreover, there are languages having arbitrarily large reversible recognizers possessing no nontrivial reversible congruence. However, the exact characterization of this class of languages was open. In this paper we give a forbidden pattern capturing the reversible regular languages having only finitely many reduced reversible automata, allowing an efficient (NL) decision procedure.
For the entire collection see [Zbl 1433.68013].Weighted automata computation of edit distances with consolidations and fragmentationshttps://www.zbmath.org/1483.681712022-05-16T20:40:13.078697Z"Giraud, Mathieu"https://www.zbmath.org/authors/?q=ai:giraud.mathieu"Jacquemard, Florent"https://www.zbmath.org/authors/?q=ai:jacquemard.florentSummary: We study edit distances between strings, based on weighted operations such as character substitutions, insertions, deletions, and \textit{consolidations} and \textit{fragmentations}. The latter two operations transform a sequence of characters into one character and vice-versa. They correspond to the compression and expansion in Dynamic Time-Warping algorithms for speech recognition and are used for the formal analysis of written music. We show that such edit distances are not computable in general, and propose weighted automata constructions to compute restricted cases of edit distances, taking into account both consolidations and deletions, or both fragmentations and insertions. Assuming that the operation ruleset has a constant size, these constructions are polynomial in the lengths of the involved strings. We finally show that the optimal weight of sequences made of consolidations chained with fragmentations, in that order, is computable for arbitrary rulesets, and not computable for some rulesets when reversing the order of fragmentations and consolidations.Input-driven double-head pushdown automatahttps://www.zbmath.org/1483.681722022-05-16T20:40:13.078697Z"Holzer, Markus"https://www.zbmath.org/authors/?q=ai:holzer.markus"Kutrib, Martin"https://www.zbmath.org/authors/?q=ai:kutrib.martin"Malcher, Andreas"https://www.zbmath.org/authors/?q=ai:malcher.andreas"Wendlandt, Matthias"https://www.zbmath.org/authors/?q=ai:wendlandt.matthiasSummary: We introduce and study input-driven deterministic and nondeterministic double-head pushdown automata.
A double-head pushdown automaton is a slight generalization of an ordinary pushdown automaton working with two input heads that move in opposite directions on the common input tape. In every step one head is moved, and the automaton decides on acceptance when the heads meet. For the automaton to work input-driven, it is required that every input symbol uniquely defines the action on the pushdown store (push, pop, state change). Normally this is modeled by a partition of the input alphabet, called a \textit{signature}. Since our automaton model works with two heads, either both heads respect the same signature or each head has its own signature. This results in two variants of input-driven double-head pushdown automata. The language families induced by input-driven double-head pushdown automata are studied from the perspectives of their language describing capability, their closure properties, and decision problems.
For the entire collection see [Zbl 1433.68013].Emptiness problems for distributed automatahttps://www.zbmath.org/1483.681732022-05-16T20:40:13.078697Z"Kuusisto, Antti"https://www.zbmath.org/authors/?q=ai:kuusisto.antti"Reiter, Fabian"https://www.zbmath.org/authors/?q=ai:reiter.fabianSummary: We investigate the decidability of the emptiness problem for three classes of distributed automata. These devices operate on finite directed graphs, acting as networks of identical finite-state machines that communicate in an infinite sequence of synchronous rounds. The problem is shown to be decidable in \textsc{LogSpace} for a class of forgetful automata, where the nodes see the messages received from their neighbors but cannot remember their own state. When restricted to the appropriate families of graphs, these forgetful automata are equivalent to classical finite word automata, but strictly more expressive than finite tree automata. On the other hand, we also show that the emptiness problem is undecidable in general. This already holds for two heavily restricted classes of distributed automata: those that reject immediately if they receive more than one message per round, and those whose state diagram must be acyclic except for self-loops.
For the entire collection see [Zbl 1436.68017].Weakly and strongly irreversible regular languageshttps://www.zbmath.org/1483.681742022-05-16T20:40:13.078697Z"Lavado, Giovanna J."https://www.zbmath.org/authors/?q=ai:lavado.giovanna-janet"Pighizzini, Giovanni"https://www.zbmath.org/authors/?q=ai:pighizzini.giovanni"Prigioniero, Luca"https://www.zbmath.org/authors/?q=ai:prigioniero.lucaSummary: Finite automata whose computations can be reversed, at any point, by knowing the last \(k\) symbols read from the input, for a fixed \(k\), are considered. These devices and their accepted languages are called \(k\)-reversible automata and \(k\)-reversible languages, respectively. The existence of \(k\)-reversible languages which are not \((k-1)\)-reversible is known, for each \(k>1\). This gives an infinite hierarchy of \textit{weakly irreversible languages}, i.e., languages which are \(k\)-reversible for some \(k\). Conditions characterizing the class of \(k\)-reversible languages, for each fixed \(k\), and the class of weakly irreversible languages are obtained. From these conditions, a procedure that, given a finite automaton, decides if the accepted language is weakly or strongly (i.e., not weakly) irreversible is described. Furthermore, a construction which allows one to transform any finite automaton which is not \(k\)-reversible, but which accepts a \(k\)-reversible language, into an equivalent \(k\)-reversible finite automaton, is presented.
For the entire collection see [Zbl 1433.68013].Descriptional complexity of non-unary self-verifying symmetric difference automatahttps://www.zbmath.org/1483.681752022-05-16T20:40:13.078697Z"Marais, Laurette"https://www.zbmath.org/authors/?q=ai:marais.laurette"van Zijl, Lynette"https://www.zbmath.org/authors/?q=ai:van-zijl.lynetteSummary: Previously, self-verifying symmetric difference automata were defined and a tight bound of \(2^{n-1}-1\) was shown for state complexity in the unary case. We now consider the non-unary case and show that, for every \(n\geq2\), there is a regular language \(\mathscr{L}_n\) accepted by a non-unary self-verifying symmetric difference nondeterministic automaton with \(n\) states, such that its equivalent minimal deterministic finite automaton has \(2^{n-1}\) states. Also, given any SV-XNFA with \(n\) states, it is possible to find, up to isomorphism, at most \(|GL(n, \mathbb{Z}_2)|-1\) other equivalent SV-XNFA.
For the entire collection see [Zbl 1433.68013].CD grammar systems with two propagating scattered context components characterize the family of context sensitive languageshttps://www.zbmath.org/1483.681762022-05-16T20:40:13.078697Z"Meduna, Alexander"https://www.zbmath.org/authors/?q=ai:meduna.alexander"Martiško, Jakub"https://www.zbmath.org/authors/?q=ai:martisko.jakubSummary: The \(\mathscr{L}(PSCG)=\mathscr{L}(CS)\) problem asks whether propagating scattered context grammars and context sensitive grammars are equivalent. The presented paper reformulates and answers this problem in terms of CD grammar systems. More specifically, it characterizes the family of context sensitive languages by two-component CD grammar systems with propagating scattered context rules.
For the entire collection see [Zbl 1433.68013].LTL to deterministic Emerson-Lei automatahttps://www.zbmath.org/1483.681772022-05-16T20:40:13.078697Z"Müller, David"https://www.zbmath.org/authors/?q=ai:muller.david"Sickert, Salomon"https://www.zbmath.org/authors/?q=ai:sickert.salomonSummary: We introduce a new translation from linear temporal logic (LTL) to deterministic Emerson-Lei automata, which are \(\omega\)-automata with a Muller acceptance condition symbolically expressed as a Boolean formula. The richer acceptance condition structure allows the shift of complexity from the state space to the acceptance condition. Conceptually, the construction is an enhanced product construction that exploits knowledge of its components to reduce the number of states. We identify two fragments of LTL, for which one can easily construct deterministic automata and show how knowledge of these components can reduce the number of states. We extend this idea to a general LTL framework, where we can use arbitrary LTL to deterministic automata translators for parts of formulas outside the mentioned fragments. Further, we show succinctness of the translation compared to existing constructions. The construction is implemented in the tool \texttt{Delag}, which we evaluate on several benchmarks of LTL formulas and probabilistic model checking case studies.
For the entire collection see [Zbl 1436.68017].A new sensing \(5'\rightarrow 3'\) Watson-Crick automata concepthttps://www.zbmath.org/1483.681782022-05-16T20:40:13.078697Z"Nagy, Benedek"https://www.zbmath.org/authors/?q=ai:nagy.benedek"Parchami, Shaghayegh"https://www.zbmath.org/authors/?q=ai:parchami.shaghayegh"Mir-Mohammad-Sadeghi, Hamid"https://www.zbmath.org/authors/?q=ai:mir-mohammad-sadeghi.hamidSummary: Watson-Crick (WK) finite automata work on a Watson-Crick tape, that is, on a DNA molecule. Therefore, they have two reading heads. While in traditional WK automata both heads read the whole input in the same physical direction, in \(5'\rightarrow 3'\) WK automata the heads start from the two extremes and read the input in opposite directions. In sensing \(5'\rightarrow 3'\) WK automata the process on the input is finished when the heads meet. Since the heads of a WK automaton may read longer strings in a transition, in previous models a so-called sensing parameter took care of the proper meeting of the heads (not allowing them to read the same positions of the input in the last step). In this paper, a new model is investigated, which works without the sensing parameter (this is achieved by an appropriate change of the concept of configuration). Consequently, the accepted language classes of the variants are also changed. Various hierarchy results are proven in the paper.
For the entire collection see [Zbl 1433.68013].Exact affine counter automatahttps://www.zbmath.org/1483.681792022-05-16T20:40:13.078697Z"Nakanishi, Masaki"https://www.zbmath.org/authors/?q=ai:nakanishi.masaki"Khadiev, Kamil"https://www.zbmath.org/authors/?q=ai:khadiev.kamil"Prūsis, Krišjānis"https://www.zbmath.org/authors/?q=ai:prusis.krisjanis"Vihrovs, Jevgēnijs"https://www.zbmath.org/authors/?q=ai:vihrovs.jevgenijs"Yakaryılmaz, Abuzer"https://www.zbmath.org/authors/?q=ai:yakaryilmaz.abuzerSummary: We introduce an affine generalization of counter automata and analyze their power, as well as that of affine finite automata. Our contributions are as follows. We show that there is a language that can be recognized by exact realtime affine counter automata but by neither 1-way deterministic pushdown automata nor realtime deterministic \(k\)-counter automata. We also show that a certain promise problem, which is conjectured not to be solved by two-way quantum finite automata in polynomial time, can be solved by Las Vegas affine finite automata. Lastly, we show how a counter helps affine finite automata by showing that the language \texttt{MANYTWINS}, which is conjectured not to be recognized by affine, quantum or classical finite state models in polynomial time, can be recognized by affine counter automata with one-sided bounded-error in realtime.
For the entire collection see [Zbl 1433.68013].On h-lexicalized restarting automatahttps://www.zbmath.org/1483.681802022-05-16T20:40:13.078697Z"Plátek, Martin"https://www.zbmath.org/authors/?q=ai:platek.martin"Otto, Friedrich"https://www.zbmath.org/authors/?q=ai:otto.friedrichSummary: Following some previous studies on restarting automata, we introduce a refined model -- the \textit{h-lexicalized restarting automaton} (h-RLWW). We argue that this model is useful for expressing lexicalized syntax in computational linguistics. We compare the input languages, which are the languages traditionally considered in automata theory, to the so-called \textit{basic} and \textit{h-proper languages}, which are (implicitly) used by categorial grammars, the original tool for the description of lexicalized syntax. The basic and h-proper languages allow us to stress several nice properties of h-lexicalized restarting automata, and they are suitable for modeling the analysis by reduction and, subsequently, for the development of categories of a lexicalized syntax. Based on the fact that a two-way deterministic monotone restarting automaton can be transformed into an equivalent deterministic monotone RL-automaton in (Marcus) contextual form, we obtain a transformation from monotone RLWW-automata that recognize the class CFL of context-free languages as their input languages to deterministic monotone h-RLWW-automata that recognize CFL through their h-proper languages. Through this transformation we obtain automata with the \textit{complete correctness preserving property} and an infinite hierarchy within CFL, based on the size of the read/write window. Additionally, we consider h-RLWW-automata that are allowed to perform multiple rewrite steps per cycle, and we establish another infinite hierarchy above CFL that is based on the number of rewrite steps that may be executed within a cycle. 
The corresponding separation results and their proofs illustrate the transparency of h-RLWW-automata that work with the (complete or cyclic) correctness preserving property.
For the entire collection see [Zbl 1433.68013].Probabilistic Kleene algebra with testshttps://www.zbmath.org/1483.681812022-05-16T20:40:13.078697Z"Qiao, Rui"https://www.zbmath.org/authors/?q=ai:qiao.rui"Wu, Jin Zhao"https://www.zbmath.org/authors/?q=ai:wu.jinzhao(no abstract)Distributive laws for monotone specificationshttps://www.zbmath.org/1483.681822022-05-16T20:40:13.078697Z"Rot, Jurriaan"https://www.zbmath.org/authors/?q=ai:rot.jurriaanSummary: Turi and Plotkin introduced an elegant approach to structural operational semantics based on universal coalgebra, parametric in the type of syntax and the type of behaviour. Their framework includes abstract GSOS, a categorical generalisation of the classical GSOS rule format, as well as its categorical dual, coGSOS. Both formats are well behaved, in the sense that each specification has a unique model on which behavioural equivalence is a congruence. Unfortunately, the combination of the two formats does not feature these desirable properties. We show that \textit{monotone} specifications -- that disallow negative premises -- do induce a canonical distributive law of a monad over a comonad, and therefore a unique, compositional interpretation.
For the entire collection see [Zbl 1433.68033].Information systems for continuous semi-latticeshttps://www.zbmath.org/1483.681832022-05-16T20:40:13.078697Z"Wang, Longchun"https://www.zbmath.org/authors/?q=ai:wang.longchun"Zhou, Xiangnan"https://www.zbmath.org/authors/?q=ai:zhou.xiangnan"Li, Qingguo"https://www.zbmath.org/authors/?q=ai:li.qingguoSummary: We consider two problems on representation theory for domains by information systems. The first problem is how to represent continuous semi-lattices as information systems, and the second one is which subclass of the continuous information systems designed by Spreen et al. for continuous domains can be used to represent bounded complete domains. We solve the first problem by defining a notion of SL-information systems. This is a special case of the continuous information systems. We also discuss the morphism part of SL-information systems, which corresponds to Scott-continuous functions between continuous semi-lattices. Noting that each bounded complete domain is a special continuous semi-lattice, we provide a correct solution to the second problem by properly strengthening the conditions of SL-information systems.Comparative analysis of statistical model checking toolshttps://www.zbmath.org/1483.681842022-05-16T20:40:13.078697Z"Bakir, Mehmet Emin"https://www.zbmath.org/authors/?q=ai:bakir.mehmet-emin"Gheorghe, Marian"https://www.zbmath.org/authors/?q=ai:gheorghe.marian"Konur, Savas"https://www.zbmath.org/authors/?q=ai:konur.savas"Stannett, Mike"https://www.zbmath.org/authors/?q=ai:stannett.mikeSummary: Statistical model checking is a powerful and flexible approach for formal verification of computational models, e.g. P systems, which can have very large search spaces.
Various statistical model checking tools have been developed, but choosing the most efficient and appropriate tool requires a significant degree of experience, not only because different tools have different modelling and property specification languages, but also because they may be designed to support only a certain subset of property types. Furthermore, their performance can vary depending on the property types and membrane systems being verified. In this paper, we evaluate the performance of various common statistical model checkers based on a pool of biological models. Our aim is to help users select the most suitable SMC tools from among the available options, by comparing their modelling and property specification languages, capabilities and performances.
For the entire collection see [Zbl 1358.68015].Relaxing exclusive control in Boolean gameshttps://www.zbmath.org/1483.681852022-05-16T20:40:13.078697Z"Belardinelli, Francesco"https://www.zbmath.org/authors/?q=ai:belardinelli.francesco"Grandi, Umberto"https://www.zbmath.org/authors/?q=ai:grandi.umberto"Herzig, Andreas"https://www.zbmath.org/authors/?q=ai:herzig.andreas"Longin, Dominique"https://www.zbmath.org/authors/?q=ai:longi.dominiquen"Lorini, Emiliano"https://www.zbmath.org/authors/?q=ai:lorini.emiliano"Novaro, Arianna"https://www.zbmath.org/authors/?q=ai:novaro.arianna"Perrussel, Laurent"https://www.zbmath.org/authors/?q=ai:perrussel.laurentSummary: In the typical framework for Boolean games (BG) each player can change the truth value of some propositional atoms, while attempting to make her goal true. In standard BG goals are propositional formulas, whereas in iterated BG goals are formulas of Linear Temporal Logic. Both notions of BG are characterised by the fact that agents have exclusive control over their set of atoms, meaning that no two agents can control the same atom. In the present contribution we drop the exclusivity assumption and explore structures where an atom can be controlled by multiple agents. We introduce Concurrent Game Structures with Shared Propositional Control (CGS-SPC) and show that they account for several classes of repeated games, including iterated Boolean games, influence games, and aggregation games. Our main result shows that, as far as verification is concerned, CGS-SPC can be reduced to concurrent game structures with exclusive control. This result provides a polynomial reduction for the model checking problem of specifications in Alternating-time Temporal Logic on CGS-SPC.
For the entire collection see [Zbl 1446.68014].On the complexity of model checking for syntactically maximal fragments of the interval temporal logic HS with regular expressionshttps://www.zbmath.org/1483.681862022-05-16T20:40:13.078697Z"Bozzelli, Laura"https://www.zbmath.org/authors/?q=ai:bozzelli.laura"Molinari, Alberto"https://www.zbmath.org/authors/?q=ai:molinari.alberto"Montanari, Angelo"https://www.zbmath.org/authors/?q=ai:montanari.angelo"Peron, Adriano"https://www.zbmath.org/authors/?q=ai:peron.adrianoSummary: In this paper, we investigate the model checking (MC) problem for Halpern and Shoham's interval temporal logic \textsf{HS}. In recent years, interval temporal logic MC has received increasing attention as a viable alternative to the traditional (point-based) temporal logic MC, which can be recovered as a special case. Most results have been obtained under the homogeneity assumption, which constrains a proposition letter to hold over an interval if and only if it holds over each component state. Recently, Lomuscio and Michaliszyn proposed a way to relax such an assumption by exploiting regular expressions to define the behaviour of proposition letters over intervals in terms of their component states. When homogeneity is assumed, the exact complexity of MC is a difficult open question for full \textsf{HS} and for its two syntactically maximal fragments \(\mathsf{A\overline{A}B\overline{BE}}\) and \(\mathsf{A\overline{A}E\overline{BE}}\). In this paper, we provide an asymptotically optimal bound on the complexity of these two fragments under the more expressive semantic variant based on regular expressions by showing that their MC problem is \(\mathbf{AEXP_{pol}}\)-complete, where \(\mathbf{AEXP_{pol}}\) denotes the complexity class of problems decided by exponential-time bounded alternating Turing Machines making a polynomially bounded number of alternations.
For the entire collection see [Zbl 1436.68017].On the complexity of \(\mathsf{ATL}\) and \(\mathsf{ATL}^*\) module checkinghttps://www.zbmath.org/1483.681872022-05-16T20:40:13.078697Z"Bozzelli, Laura"https://www.zbmath.org/authors/?q=ai:bozzelli.laura"Murano, Aniello"https://www.zbmath.org/authors/?q=ai:murano.anielloSummary: \textit{Module checking} has been introduced in the late 1990s to verify open systems, i.e., systems whose behavior depends on the continuous interaction with the environment. Classically, module checking has been investigated with respect to specifications given as \(\mathsf{CTL}\) and \(\mathsf{CTL}^*\) formulas. Recently, it has been shown that \(\mathsf{CTL}\) (resp., \(\mathsf{CTL}^*\)) module checking offers a distinctly different perspective from the better-known problem of \(\mathsf{ATL}\) (resp., \(\mathsf{ATL}^*\)) \textit{model} checking. In particular, \(\mathsf{ATL}\) (resp., \(\mathsf{ATL}^*\)) module checking strictly enhances the expressiveness of both \(\mathsf{CTL}\) (resp., \(\mathsf{CTL}^*\)) module checking and \(\mathsf{ATL}\) (resp., \(\mathsf{ATL}^*\)) model checking. In this paper, we provide asymptotically optimal bounds on the computational cost of module checking against \(\mathsf{ATL}\) and \(\mathsf{ATL}^*\), whose upper bounds are based on an automata-theoretic approach. We show that module checking for \(\mathsf{ATL}\) is \textsc{Exptime}-complete, which is the same complexity as \(\mathsf{CTL}\) module checking. On the other hand, \(\mathsf{ATL}^*\) module checking turns out to be 3\textsc{Exptime}-complete, hence exponentially harder than \(\mathsf{CTL}^*\) module checking.
For the entire collection see [Zbl 1436.68017].A backward-traversal-based approach for symbolic model checking of uniform strategies for constrained reachabilityhttps://www.zbmath.org/1483.681882022-05-16T20:40:13.078697Z"Busard, Simon"https://www.zbmath.org/authors/?q=ai:busard.simon"Pecheur, Charles"https://www.zbmath.org/authors/?q=ai:pecheur.charlesSummary: Since the introduction of Alternating-time Temporal Logic (\textit{ATL}), many logics have been proposed to reason about different strategic capabilities of the agents of a system. In particular, some logics have been designed to reason about the uniform memoryless strategies of such agents. These strategies are the ones the agents can effectively play by only looking at what they observe from the current state. \(\mathit{ATL}_\mathit{ir}\) can be seen as the core logic to reason about such uniform strategies. Nevertheless, its model-checking problem is difficult -- it requires a polynomial number of calls to an NP oracle --, and practical algorithms to solve it appeared only recently.
This paper proposes a technique for model checking uniform memoryless strategies. Existing techniques build the strategies from the states of interest -- such as the initial states -- through a forward traversal of the system. On the other hand, the proposed approach builds the winning strategies from the target states through a backward traversal, making sure that only uniform strategies are explored. Nevertheless, building the strategies from the ground up limits its applicability to constrained reachability objectives only. This paper describes the approach in detail and compares it experimentally with existing approaches implemented in a BDD-based framework. These experiments show that the technique is competitive on the cases it can handle.
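Stripped of the uniformity bookkeeping that is the paper's actual contribution, the backward traversal at the core of this approach is a least-fixpoint computation over the transition relation. A minimal single-agent sketch (names and encoding are ours, for illustration only):

```python
def backward_reach(transitions, targets):
    """Backward traversal: repeatedly add every state with a transition
    into the set already known to reach `targets`, until a fixpoint is
    reached.  The paper's algorithm additionally restricts the explored
    strategy choices to uniform ones; that pruning is omitted here."""
    reach = set(targets)
    changed = True
    while changed:
        changed = False
        for (src, dst) in transitions:
            if dst in reach and src not in reach:
                reach.add(src)
                changed = True
    return reach
```

For example, with transitions {(1, 2), (2, 3), (4, 1)} and target {3}, the fixpoint {1, 2, 3, 4} is reached in three rounds.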
For the entire collection see [Zbl 1436.68017].Proving properties of sorting programs: a case study in Horn clause verificationhttps://www.zbmath.org/1483.681892022-05-16T20:40:13.078697Z"De Angelis, Emanuele"https://www.zbmath.org/authors/?q=ai:de-angelis.emanuele"Fioravanti, Fabio"https://www.zbmath.org/authors/?q=ai:fioravanti.fabio"Pettorossi, Alberto"https://www.zbmath.org/authors/?q=ai:pettorossi.alberto"Proietti, Maurizio"https://www.zbmath.org/authors/?q=ai:proietti.maurizioSummary: The proof of a program property can be reduced to the proof of satisfiability of a set of constrained Horn clauses (CHCs) which can be automatically generated from the program and the property. In this paper we have conducted a case study in Horn clause verification by considering several sorting programs with the aim of exploring the effectiveness of a transformation technique which allows us to eliminate inductive data structures such as lists or trees. If this technique is successful, we derive a set of CHCs with constraints over the integers and booleans only, and the satisfiability check can often be performed in an effective way by using state-of-the-art CHC solvers, such as Eldarica or Z3. In this case study we have also illustrated the usefulness of a companion technique based on the introduction of the so-called \textit{difference predicates}, whose definitions correspond to lemmata required during the verification. 
We have considered functional programs which implement the following kinds of sorting algorithms acting on lists of integers: (i) linearly recursive sorting algorithms, such as insertion sort and selection sort, and (ii) non-linearly recursive sorting algorithms, such as quicksort and mergesort, and we have considered the following properties: (i) the partial correctness properties, that is, the orderedness of the output lists, and the equality of the input and output lists when viewed as multisets, and (ii) some arithmetic properties, such as the equality of the sum of the elements before and after sorting.
For the entire collection see [Zbl 1441.68008].Random models for evaluating efficient Büchi universality checkinghttps://www.zbmath.org/1483.681902022-05-16T20:40:13.078697Z"Fisher, Corey"https://www.zbmath.org/authors/?q=ai:fisher.corey"Fogarty, Seth"https://www.zbmath.org/authors/?q=ai:fogarty.seth-j"Vardi, Moshe"https://www.zbmath.org/authors/?q=ai:vardi.moshe-ySummary: Automata-theoretic formal verification approaches the problem of guaranteeing that a program conforms to its specification by reducing conformance to language containment. We can prove conformance by representing both programs and specifications as automata and proving that the specification contains the program. This connection to the theory of automata on infinite words motivated an extensive research program into the algorithmic theory of automata on infinite words, with a focus on algorithms that perform well in practice. The focus on practical performance is important because of the large gap between worst-case complexity and practice for many automata-theoretic algorithms. Unfortunately, there are few benchmark instances of automata in industrial verification. To overcome this challenge, Tabakov and Vardi proposed a model for generating random automata as test cases.
The Tabakov-Vardi (T-V) model, however, is just one random model, based on a specific, rather simple model of random graphs. Other models of random graphs have been studied over the years. While the T-V model has the advantage of simplicity, it is not clear that performance analysis conducted on this model is robust, and an analogous analysis over other random models might yield different conclusions. To address this problem, we introduce three novel models of random automata, yielding automata that are richer in structure than the automata generated by the T-V model. By generating large corpora of random automata and using them to evaluate the performance of universality-checking algorithms, we show that the T-V model is a robust random model for evaluating performance of universality-checking algorithms.
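For concreteness, the T-V model is parameterised by a number of states \(n\), a transition density \(r\) (each letter receives \(\lfloor rn\rfloor\) distinct transitions drawn uniformly at random) and an acceptance density \(f\) (\(\lfloor fn\rfloor\) states are accepting, state 0 is initial). A sketch of the sampler, with function name and output encoding ours:

```python
import random

def tabakov_vardi(n, alphabet, trans_density, acc_density, seed=None):
    """Sample an automaton in the Tabakov-Vardi random model:
    state 0 is initial; for each letter, floor(trans_density * n)
    distinct transitions are drawn uniformly among the n*n candidate
    state pairs, and floor(acc_density * n) states are accepting."""
    rng = random.Random(seed)
    k = int(trans_density * n)        # transitions per letter
    f = max(1, int(acc_density * n))  # at least one accepting state
    pairs = [(p, q) for p in range(n) for q in range(n)]
    delta = {a: rng.sample(pairs, k) for a in alphabet}
    accepting = set(rng.sample(range(n), f))
    return {"states": n, "initial": 0, "delta": delta,
            "accepting": accepting}
```

Sweeping `trans_density` and `acc_density` over a grid and sampling many automata per grid point yields corpora of the kind used in such performance evaluations.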
For the entire collection see [Zbl 1416.03005].Bellerophon: tactical theorem proving for hybrid systemshttps://www.zbmath.org/1483.681912022-05-16T20:40:13.078697Z"Fulton, Nathan"https://www.zbmath.org/authors/?q=ai:fulton.nathan"Mitsch, Stefan"https://www.zbmath.org/authors/?q=ai:mitsch.stefan"Bohrer, Brandon"https://www.zbmath.org/authors/?q=ai:bohrer.brandon"Platzer, André"https://www.zbmath.org/authors/?q=ai:platzer.andreSummary: Hybrid systems combine discrete and continuous dynamics, which makes them attractive as models for systems that combine computer control with physical motion. Verification is undecidable for hybrid systems and challenging for many models and properties of practical interest. Thus, human interaction and insight are essential for verification. Interactive theorem provers seek to increase user productivity by allowing them to focus on those insights. We present a tactics language and library for hybrid systems verification, named Bellerophon, that provides a way to convey insights by programming hybrid systems proofs.
We demonstrate that in focusing on the important domain of hybrid systems verification, Bellerophon emerges with unique automation that provides a productive proving experience for hybrid systems from a small foundational prover core in the KeYmaera X prover. Among the automation that emerges are tactics for decomposing hybrid systems, discovering and establishing invariants of nonlinear continuous systems, arithmetic simplifications to maximize the benefit of automated solvers and general-purpose heuristic proof search. Our presentation begins with syntax and semantics for the Bellerophon tactic combinator language, culminating in an example verification effort exploiting Bellerophon's support for invariant and arithmetic reasoning for a non-solvable system.
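The combinator style of such tactic languages can be sketched generically: a tactic transforms a list of open goals, and combinators build larger tactics from smaller ones. The toy below is in the spirit of tactic combinator languages generally, not Bellerophon's actual API, which operates on KeYmaera X proof states:

```python
# A tactic maps a list of goals to the list of remaining goals,
# raising ValueError when it is not applicable.

def then(t1, t2):
    """Sequential composition: run t1, then t2 on what remains."""
    return lambda goals: t2(t1(goals))

def orelse(t1, t2):
    """Alternative: try t1; fall back to t2 if t1 fails."""
    def t(goals):
        try:
            return t1(goals)
        except ValueError:
            return t2(goals)
    return t

def repeat(t1):
    """Apply t1 until it fails or no goals remain."""
    def t(goals):
        while goals:
            try:
                goals = t1(goals)
            except ValueError:
                break
        return goals
    return t

# Example leaf tactic: close a goal that is the literal True.
def close_true(goals):
    if goals and goals[0] is True:
        return goals[1:]
    raise ValueError("close_true not applicable")
```

With these pieces, `repeat(close_true)` discharges any prefix of trivially true goals; real tactic libraries provide leaf tactics for decomposition, invariant search and arithmetic in place of `close_true`.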
For the entire collection see [Zbl 1369.68009].How many bits does it take to quantize your neural network?https://www.zbmath.org/1483.681922022-05-16T20:40:13.078697Z"Giacobbe, Mirco"https://www.zbmath.org/authors/?q=ai:giacobbe.mirco"Henzinger, Thomas A."https://www.zbmath.org/authors/?q=ai:henzinger.thomas-a"Lechner, Mathias"https://www.zbmath.org/authors/?q=ai:lechner.mathiasSummary: Quantization converts neural networks into low-bit fixed-point computations which can be carried out by efficient integer-only hardware, and is standard practice for the deployment of neural networks on real-time embedded devices. However, like their real-numbered counterpart, quantized networks are not immune to malicious misclassification caused by adversarial attacks. We investigate how quantization affects a network's robustness to adversarial attacks, which is a formal verification question. We show that neither robustness nor non-robustness are monotonic with changing the number of bits for the representation and, also, neither are preserved by quantization from a real-numbered network. For this reason, we introduce a verification method for quantized neural networks which, using SMT solving over bit-vectors, accounts for their exact, bit-precise semantics. We built a tool and analyzed the effect of quantization on a classifier for the MNIST dataset. We demonstrate that, compared to our method, existing methods for the analysis of real-numbered networks often derive false conclusions about their quantizations, both when determining robustness and when detecting attacks, and that existing methods for quantized networks often miss attacks. Furthermore, we applied our method beyond robustness, showing how the number of bits in quantization enlarges the gender bias of a predictor for students' grades.
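To see why quantized semantics are bit-precise rather than real-valued, consider signed fixed-point quantization: a real weight is rounded to an integer with an implicit scaling factor and saturated to the representable range. A minimal sketch (parameter names ours; real deployments also fix rounding modes per operation):

```python
def quantize(x, total_bits, frac_bits):
    """Round x to a signed fixed-point value with `frac_bits`
    fractional bits, saturating at the `total_bits`-bit
    two's-complement range, and return the real value the integer
    hardware actually computes with."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale
```

For instance, with 8 total bits and 4 fractional bits, 0.3 becomes 5/16 = 0.3125, and 10.0 saturates to 127/16; such rounding and saturation are precisely the effects that a bit-vector SMT encoding captures and a real-numbered analysis misses.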
For the entire collection see [Zbl 1471.68010].Equilibria for games with combined qualitative and quantitative objectiveshttps://www.zbmath.org/1483.681932022-05-16T20:40:13.078697Z"Gutierrez, Julian"https://www.zbmath.org/authors/?q=ai:gutierrez.julian"Murano, Aniello"https://www.zbmath.org/authors/?q=ai:murano.aniello"Perelli, Giuseppe"https://www.zbmath.org/authors/?q=ai:perelli.giuseppe"Rubin, Sasha"https://www.zbmath.org/authors/?q=ai:rubin.sasha"Steeples, Thomas"https://www.zbmath.org/authors/?q=ai:steeples.thomas"Wooldridge, Michael"https://www.zbmath.org/authors/?q=ai:wooldridge.michael-jSummary: The overall aim of our research is to develop techniques to reason about the equilibrium properties of multi-agent systems. We model multi-agent systems as concurrent games, in which each player is a process that is assumed to act independently and strategically in pursuit of personal preferences. In this article, we study these games in the context of finite-memory strategies, and we assume players' preferences are defined by a qualitative and a quantitative objective, which are related by a lexicographic order: a player first prefers to satisfy its qualitative objective (given as a formula of linear temporal logic) and then prefers to minimise costs (given by a mean-payoff function). 
Our main result is that deciding the existence of a strict \(\epsilon\)-Nash equilibrium in such games is \textsc{2ExpTime}-complete (and hence decidable), even if players' deviations are implemented as infinite-memory strategies.Understanding counterexamples for relational properties with \textit{DIbugger}https://www.zbmath.org/1483.681942022-05-16T20:40:13.078697Z"Herda, Mihai"https://www.zbmath.org/authors/?q=ai:herda.mihai"Kirsten, Michael"https://www.zbmath.org/authors/?q=ai:kirsten-michael."Brunner, Etienne"https://www.zbmath.org/authors/?q=ai:brunner.etienne"Plewnia, Joana"https://www.zbmath.org/authors/?q=ai:plewnia.joana"Scheler, Ulla"https://www.zbmath.org/authors/?q=ai:scheler.ulla"Staudenmaier, Chiara"https://www.zbmath.org/authors/?q=ai:staudenmaier.chiara"Wagner, Benedikt"https://www.zbmath.org/authors/?q=ai:wagner.benedikt"Zwick, Pascal"https://www.zbmath.org/authors/?q=ai:zwick.pascal"Beckert, Bernhard"https://www.zbmath.org/authors/?q=ai:beckert.bernhardSummary: Software verification is a tedious process that involves the analysis of multiple failed verification attempts, and adjustments of the program or specification. This is especially the case for complex requirements, e.g., regarding security or fairness, when one needs to compare multiple related runs of the same software. Verification tools often provide counterexamples consisting of program inputs when a proof attempt fails; however, it is often not clear why the reported counterexample leads to a violation of the checked property. In this paper, we enhance this aspect of the software verification process by providing \textit{DIbugger}, a tool for analyzing counterexamples of relational properties, allowing the user to debug multiple related programs simultaneously.
For the entire collection see [Zbl 1441.68008].Parametric Markov chains: PCTL complexity and fraction-free Gaussian eliminationhttps://www.zbmath.org/1483.681952022-05-16T20:40:13.078697Z"Hutschenreiter, Lisa"https://www.zbmath.org/authors/?q=ai:hutschenreiter.lisa"Baier, Christel"https://www.zbmath.org/authors/?q=ai:baier.christel"Klein, Joachim"https://www.zbmath.org/authors/?q=ai:klein.joachimSummary: Parametric Markov chains have been introduced as a model for families of stochastic systems that rely on the same graph structure, but differ in the concrete transition probabilities. The latter are specified by polynomial constraints for the parameters. Among the tasks typically addressed in the analysis of parametric Markov chains are (1) the computation of closed-form solutions for reachabilty probabilities and other quantitative measures and (2) finding symbolic representations of the set of parameter valuations for which a given temporal logical formula holds as well as (3) the decision variant of (2) that asks whether there exists a parameter valuation where a temporal logical formula holds. Our contribution to (1) is to show that existing implementations for computing rational functions for reachability probabilities or expected costs in parametric Markov chains can be improved by using fraction-free Gaussian elimination, a long-known technique for linear equation systems with parametric coefficients. Our contribution to (2) and (3) is a complexity-theoretic discussion of the model checking problem for parametric Markov chains and probabilistic computation tree logic (PCTL) formulas. We present an exponential-time algorithm for (2) and a PSPACE upper bound for (3). Moreover, we identify fragments of PCTL and subclasses of parametric Markov chains where (1) and (3) are solvable in polynomial time and establish NP-hardness for other PCTL fragments.
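Fraction-free (Bareiss) elimination keeps every intermediate entry in the ground ring via an exact division by the previous pivot, which is what makes it attractive when the entries are polynomials in the parameters. A minimal integer-matrix sketch (pivoting omitted; the same recurrence runs unchanged on polynomial entries):

```python
def bareiss(matrix):
    """Fraction-free (Bareiss) Gaussian elimination on a square
    integer matrix.  All intermediate entries remain integers, and
    the final pivot a[n-1][n-1] equals the determinant."""
    a = [row[:] for row in matrix]
    n = len(a)
    prev = 1
    for k in range(n - 1):
        assert a[k][k] != 0, "zero pivot: the full algorithm swaps rows"
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees this division is exact.
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a
```

For example, on [[1, 2, 3], [4, 5, 6], [7, 8, 10]] the final pivot is -3, the determinant, with no fractions produced along the way; with rational-function entries, the analogous exact cancellation is what keeps the parametric solution small.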
For the entire collection see [Zbl 1436.68017].Fold/unfold transformations for fixpoint logichttps://www.zbmath.org/1483.681962022-05-16T20:40:13.078697Z"Kobayashi, Naoki"https://www.zbmath.org/authors/?q=ai:kobayashi.naoki"Fedyukovich, Grigory"https://www.zbmath.org/authors/?q=ai:fedyukovich.grigory"Gupta, Aarti"https://www.zbmath.org/authors/?q=ai:gupta.aartiSummary: Fixpoint logics have recently been drawing attention as common foundations for automated program verification. We formalize fold/unfold transformations for fixpoint logic formulas and show how they can be used to enhance a recent fixpoint-logic approach to automated program verification, including automated verification of relational and temporal properties. We have implemented the transformations in a tool and confirmed its effectiveness through experiments.
For the entire collection see [Zbl 1471.68010].Multi-player equilibria verification for concurrent stochastic gameshttps://www.zbmath.org/1483.681972022-05-16T20:40:13.078697Z"Kwiatkowska, Marta"https://www.zbmath.org/authors/?q=ai:kwiatkowska.marta-z"Norman, Gethin"https://www.zbmath.org/authors/?q=ai:norman.gethin"Parker, David"https://www.zbmath.org/authors/?q=ai:parker.david-j|parker.david-f"Santos, Gabriel"https://www.zbmath.org/authors/?q=ai:santos.gabriel-pinheiro-alvesSummary: Concurrent stochastic games (CSGs) are an ideal formalism for modelling probabilistic systems that feature multiple players or components with distinct objectives making concurrent, rational decisions. Examples include communication or security protocols and multi-robot navigation. Verification methods for CSGs exist but are limited to scenarios where agents or players are grouped into two \textit{coalitions}, with those in the same coalition sharing an identical objective. In this paper, we propose \textit{multi-coalitional} verification techniques for CSGs. We use subgame-perfect social welfare (or social cost) optimal Nash equilibria, which are strategies where there is no incentive for any coalition to unilaterally change its strategy in any game state, and where the total combined objectives are maximised (or minimised). We present an extension of the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards) to specify equilibria-based properties for any number of distinct coalitions, and a corresponding model checking algorithm for a variant of stopping games. We implement our techniques in the PRISM-games tool and apply them to several case studies, including a secret sharing protocol and a public good game.
For the entire collection see [Zbl 1475.68022].The descriptive complexity of modal \(\mu\) model-checking gameshttps://www.zbmath.org/1483.681982022-05-16T20:40:13.078697Z"Lehtinen, Karoliina"https://www.zbmath.org/authors/?q=ai:lehtinen.karoliinaSummary: This paper revisits the well-established relationship between the modal \(\mu\) calculus \(L_\mu\) and parity games to show that it is even more robust than previously known. It addresses the question of whether the descriptive complexity of modal \(L_\mu\) model-checking games, previously known to depend on the syntactic complexity of a formula, depends in fact on its semantic complexity. It shows that up to formulas of semantic complexity \(\Sigma_2^\mu\), the descriptive complexity of their model-checking games coincides exactly with their semantic complexity. Beyond \(\Sigma_2^\mu\), the descriptive complexity of the model-checking parity games of a formula \(\Psi\) is shown to be an upper bound on the semantic complexity of \(\Psi\); whether it is also a lower bound remains an open question.
For the entire collection see [Zbl 1436.68017].Using unified model checking to verify heapshttps://www.zbmath.org/1483.681992022-05-16T20:40:13.078697Z"Lu, Xu"https://www.zbmath.org/authors/?q=ai:lu.xu"Duan, Zhenhua"https://www.zbmath.org/authors/?q=ai:duan.zhenhua"Tian, Cong"https://www.zbmath.org/authors/?q=ai:tian.congSummary: This paper addresses the problem of verifying heap evolution properties of pointer programs. To this end, a new unified model checking approach with MSVL (modeling, simulation and verification language) and \(\mathrm{PPTL}^\mathrm{SL}\) is presented. The former is an executable subset of PTL (projection temporal logic) while the latter is an extension of PPTL (propositional projection temporal logic) with separation logic. MSVL is used to model pointer programs, and \(\mathrm{PPTL}^\mathrm{SL}\) to specify heap evolution properties. In addition, we implement a prototype in order to demonstrate our approach.
For the entire collection see [Zbl 1377.68004].What's decidable about program verification modulo axioms?https://www.zbmath.org/1483.682002022-05-16T20:40:13.078697Z"Mathur, Umang"https://www.zbmath.org/authors/?q=ai:mathur.umang"Madhusudan, P."https://www.zbmath.org/authors/?q=ai:madhusudan.parthasarathy"Viswanathan, Mahesh"https://www.zbmath.org/authors/?q=ai:viswanathan.maheshSummary: We consider the decidability of the verification problem of programs \textit{modulo axioms} -- automatically verifying whether programs satisfy their assertions, when the function and relation symbols are interpreted as arbitrary functions and relations that satisfy a set of first-order axioms. Though verification of uninterpreted programs (with no axioms) is already undecidable, a recent work introduced a subclass of \textit{coherent} uninterpreted programs, and showed that they admit decidable verification
[the authors, ``Decidable verification of uninterpreted programs'', Proc. ACM Program. Lang. 3, No. POPL, Article No. 46, 29 p. (2019; \url{doi:10.1145/3290359})].
We undertake a systematic study of various natural axioms for relations and functions, and study the decidability of the coherent verification problem. Axioms include relations being reflexive, symmetric, transitive, or total order relations, functions restricted to being associative, idempotent or commutative, and combinations of such axioms as well. Our comprehensive results unearth a rich landscape that shows that though several axiom classes admit decidability for coherent programs, coherence is not a panacea as several others continue to be undecidable.
For the entire collection see [Zbl 1471.68010].Partial-order reduction for parity games with an application on parameterised Boolean equation systemshttps://www.zbmath.org/1483.682012022-05-16T20:40:13.078697Z"Neele, Thomas"https://www.zbmath.org/authors/?q=ai:neele.thomas"Willemse, Tim A. C."https://www.zbmath.org/authors/?q=ai:willemse.tim-a-c"Wesselink, Wieger"https://www.zbmath.org/authors/?q=ai:wesselink.wiegerSummary: Partial-order reduction (POR) is a well-established technique to combat the problem of state-space explosion. We propose POR techniques that are sound for parity games, a well-established formalism for solving a variety of decision problems. As a consequence, we obtain the first POR method that is sound for model checking for the full modal \(\mu \)-calculus. Our technique is applied to, and implemented for the fixed point logic called \textit{parameterised Boolean equation systems}, which provides a high-level representation of parity games. Experiments indicate that substantial reductions can be achieved.
For the entire collection see [Zbl 1471.68010].Model checking social network modelshttps://www.zbmath.org/1483.682022022-05-16T20:40:13.078697Z"Pardo, Raúl"https://www.zbmath.org/authors/?q=ai:pardo.raul"Schneider, Gerardo"https://www.zbmath.org/authors/?q=ai:schneider.gerardoSummary: A \textit{social network service} is a platform to build social relations among people sharing similar interests and activities. The underlying structure of a social networks service is the \textit{social graph}, where nodes represent users and the arcs represent the users' social links and other kind of connections. One important concern in social networks is \textit{privacy}: what others are (not) allowed to \textit{know} about us. The ``logic of knowledge'' (\textit{epistemic logic}) is thus a good formalism to define, and reason about, privacy policies. In this paper we consider the problem of verifying knowledge properties over \textit{social network models} (SNMs), that is social graphs enriched with \textit{knowledge bases} containing the information that the users know. More concretely, our contributions are: i) We prove that the model checking problem for epistemic properties over SNMs is decidable; ii) We prove that a number of properties of knowledge that are sound w.r.t. Kripke models are also sound w.r.t. SNMs; iii) We give a satisfaction-preserving encoding of SNMs into \textit{canonical} Kripke models, and we also characterise which Kripke models may be translated into SNMs; iv) We show that, for SNMs, the model checking problem is cheaper than the one based on standard Kripke models. Finally, we have developed a proof-of-concept implementation of the model-checking algorithm for SNMs.
For the entire collection see [Zbl 1436.68017].Verifying concurrent graph algorithmshttps://www.zbmath.org/1483.682032022-05-16T20:40:13.078697Z"Raad, Azalea"https://www.zbmath.org/authors/?q=ai:raad.azalea"Hobor, Aquinas"https://www.zbmath.org/authors/?q=ai:hobor.aquinas"Villard, Jules"https://www.zbmath.org/authors/?q=ai:villard.jules"Gardner, Philippa"https://www.zbmath.org/authors/?q=ai:gardner.philippa-anneSummary: We show how to verify four challenging concurrent fine-grained graph-manipulating algorithms, including graph copy, a speculatively parallel Dijkstra, graph marking and spanning tree. We develop a reasoning method for such algorithms that dynamically tracks the contributions and responsibilities of each thread operating on a graph, even in cases of arbitrary recursive thread creation. We demonstrate how to use a logic without abstraction (CoLoSL) to carry out abstract reasoning in the style of iCAP, by building the abstraction into the proof structure rather than incorporating it into the semantic model of the logic.
For the entire collection see [Zbl 1347.68009].Bayesian inference by symbolic model checkinghttps://www.zbmath.org/1483.682042022-05-16T20:40:13.078697Z"Salmani, Bahare"https://www.zbmath.org/authors/?q=ai:salmani.bahare"Katoen, Joost-Pieter"https://www.zbmath.org/authors/?q=ai:katoen.joost-pieterSummary: This paper applies probabilistic model checking techniques for discrete Markov chains to inference in Bayesian networks. We present a simple translation from Bayesian networks into tree-like Markov chains such that inference can be reduced to computing reachability probabilities. Using a prototypical implementation on top of the Storm model checker, we show that symbolic data structures such as multi-terminal BDDs (MTBDDs) are very effective for performing inference on large Bayesian network benchmarks. We compare our results with inference using probabilistic sentential decision diagrams and vtrees, a scalable symbolic technique in AI inference tools.
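To make the flavour of the reduction concrete, here is a minimal illustrative sketch (not the authors' Storm-based implementation): a hypothetical two-node Bayesian network \(A \to B\) is unfolded into a tree-like chain whose paths enumerate assignments, and a marginal such as \(P(B{=}1)\) is obtained as the probability of reaching the leaves where \(B = 1\). All names and numbers below are invented for the example.

```python
from itertools import product

# Hypothetical two-node Bayesian network A -> B, given by its CPTs.
p_a = {1: 0.3, 0: 0.7}                       # P(A)
p_b_given_a = {1: {1: 0.9, 0: 0.1},          # P(B | A=1)
               0: {1: 0.2, 0: 0.8}}          # P(B | A=0)

def reach_probability(goal):
    """Sum path probabilities of the tree-like chain: the root branches on A,
    then on B; a path's probability is the product of its branch weights."""
    total = 0.0
    for a, b in product((0, 1), repeat=2):
        if goal(a, b):
            total += p_a[a] * p_b_given_a[a][b]
    return total

# Inference of P(B=1) as reachability of all leaves with B=1:
# 0.3*0.9 + 0.7*0.2 = 0.41.
print(reach_probability(lambda a, b: b == 1))
```

The actual translation in the paper handles arbitrary networks and exploits MTBDD sharing; this sketch only shows why reachability in the unfolded chain coincides with the marginal.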
For the entire collection see [Zbl 1475.68022].ParaPlan: a tool for parallel reachability analysis of planar polygonal differential inclusion systemshttps://www.zbmath.org/1483.682052022-05-16T20:40:13.078697Z"Sandler, Andrei"https://www.zbmath.org/authors/?q=ai:sandler.andrei"Tveretina, Olga"https://www.zbmath.org/authors/?q=ai:tveretina.olgaSummary: We present the ParaPlan tool which provides the reachability analysis of planar hybrid systems defined by differential inclusions (SPDI). It uses the parallelized and optimized version of the algorithm underlying the SPeeDI tool
[\textit{E. Asarin} et al., Lect. Notes Comput. Sci. 2404, 354--358 (2002; Zbl 1010.68791)].
The performance comparison demonstrates a speed-up of up to 83 times with respect to the sequential implementation on various benchmarks. Some of the benchmarks we used are randomly generated with a novel approach based on partitioning the plane with Voronoi diagrams.
For the entire collection see [Zbl 1436.68017].Higher-order model checking in direct stylehttps://www.zbmath.org/1483.682062022-05-16T20:40:13.078697Z"Terao, Taku"https://www.zbmath.org/authors/?q=ai:terao.taku"Tsukada, Takeshi"https://www.zbmath.org/authors/?q=ai:tsukada.takeshi"Kobayashi, Naoki"https://www.zbmath.org/authors/?q=ai:kobayashi.naokiSummary: Higher-order model checking, or model checking of higher-order recursion schemes, has been recently applied to fully automated verification of functional programs. The previous approach has been indirect, in the sense that higher-order functional programs are first abstracted to (call-by-value) higher-order Boolean programs, and then further translated to higher-order recursion schemes (which are essentially call-by-name programs) and model checked. These multi-step transformations caused a number of problems such as code explosion. In this paper, we advocate a more direct approach, where higher-order Boolean programs are directly model checked, without transformation to higher-order recursion schemes. To this end, we develop a model checking algorithm for higher-order call-by-value Boolean programs, and prove its correctness. According to experiments, our prototype implementation outperforms the indirect method for large instances.
For the entire collection see [Zbl 1347.68009].Linear-time temporal logic with event freezing functionshttps://www.zbmath.org/1483.682072022-05-16T20:40:13.078697Z"Tonetta, Stefano"https://www.zbmath.org/authors/?q=ai:tonetta.stefanoSummary: Formal properties represent a cornerstone of the system-correctness proofs based on formal verification techniques such as model checking. Formalizing requirements into temporal properties may be very complex and error prone, due not only to the ambiguity of the textual requirements but also to the complexity of the formal language. Finding a property specification language that balances simplicity, expressiveness, and tool support remains an open problem in many real-world contexts.
In this paper, we propose a new temporal logic, which extends First-Order Linear-time Temporal Logic with Past by adding two operators, ``at next'' and ``at last'', which take as input a term and a formula and represent the value of the term at the next state in the future, or the last state in the past, in which the formula holds. We consider different models of time (including discrete, dense, and super-dense time) and Satisfiability Modulo Theories (SMT) of the first-order formulas. The ``at next'' and ``at last'' functions can be seen as a generalization of Event-Clock operators and can encode some Metric Temporal operators, even with counting. They are useful for formalizing properties of component-based models because they make it possible to express constraints on the data exchanged via messages at different instants of time. We provide a simple encoding into equisatisfiable formulas without the extra function symbols. We implement prototype tool support based on SMT-based model checking.
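As a rough illustration of the intended reading of ``at next'' (on a finite, discrete trace only; the paper's semantics covers dense and super-dense time and first-order theories), the operator returns the value of a term at the first future state satisfying a formula. The trace, its state fields, and the helper below are invented for the example.

```python
def at_next(trace, i, term, formula):
    """Value of `term` at the first state strictly after position i in which
    `formula` holds; None if no such state exists (on a finite trace this
    only approximates the paper's definedness condition)."""
    for j in range(i + 1, len(trace)):
        if formula(trace[j]):
            return term(trace[j])
    return None

# A toy trace of states: a message payload and an acknowledgement flag.
trace = [{"msg": 0, "ack": False},
         {"msg": 5, "ack": False},
         {"msg": 7, "ack": True}]

# "The value of msg at the next state where ack holds", evaluated at state 0:
print(at_next(trace, 0, lambda s: s["msg"], lambda s: s["ack"]))  # prints 7
```

This is exactly the kind of constraint on data exchanged at different instants that the abstract mentions for component-based models.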
For the entire collection see [Zbl 1436.68017].Configurable verification of timed automata with discrete variableshttps://www.zbmath.org/1483.682082022-05-16T20:40:13.078697Z"Tóth, Tamás"https://www.zbmath.org/authors/?q=ai:toth.tamas"Majzik, István"https://www.zbmath.org/authors/?q=ai:majzik.istvanSummary: Algorithms and protocols with time dependent behavior are often specified formally using timed automata. For practical real-time systems, besides real-valued clock variables, these specifications typically contain discrete data variables with nontrivial data flow. In this paper, we propose a configurable lazy abstraction framework for the location reachability problem of timed automata that potentially contain discrete variables. Moreover, based on our previous work, we uniformly formalize in our framework several abstraction refinement strategies for both clock and discrete variables that can be freely combined, resulting in many distinct algorithm configurations. Besides the proposed refinement strategies, the configurability of the framework allows the integration of existing efficient lazy abstraction algorithms for clock variables based on \({LU}\)-bounds. We demonstrate the applicability of the framework and the proposed refinement strategies by an empirical evaluation on a wide range of timed automata models, including ones that contain discrete variables or diagonal constraints.Optimizing epistemic model checking using conditional independence (extended abstract)https://www.zbmath.org/1483.682092022-05-16T20:40:13.078697Z"van der Meyden, Ron"https://www.zbmath.org/authors/?q=ai:van-der-meyden.ronSummary: This paper shows that conditional independence reasoning can be applied to optimize epistemic model checking, in which one verifies that a model for a number of agents operating with imperfect information satisfies a formula expressed in a modal multi-agent logic of knowledge. The optimization has been implemented in the epistemic model checker MCK. 
The paper reports experimental results demonstrating that it can yield multiple orders of magnitude performance improvements.
For the entire collection see [Zbl 1446.68014].Undecidable cases of model checking probabilistic temporal-epistemic logic (extended abstract)https://www.zbmath.org/1483.682102022-05-16T20:40:13.078697Z"van der Meyden, Ron"https://www.zbmath.org/authors/?q=ai:van-der-meyden.ron"Patra, Manas K."https://www.zbmath.org/authors/?q=ai:patra.manas-kSummary: We investigate the decidability of model-checking logics of time, knowledge and probability, with respect to two epistemic semantics: the clock and synchronous perfect recall semantics in partially observed discrete-time Markov chains. Decidability results are known for certain restricted logics with respect to these semantics, subject to a variety of restrictions that are either unexplained or involve a longstanding unsolved mathematical problem. We show that mild generalizations of the known decidable cases suffice to render the model checking problem definitively undecidable. In particular, for a synchronous perfect recall, a generalization from temporal operators with finite reach to operators with infinite reach renders model checking undecidable. The case of the clock semantics is closely related to a monadic second order logic of time and probability that is known to be decidable, except on a set of measure zero. We show that two distinct extensions of this logic make model checking undecidable. One of these involves polynomial combinations of probability terms, the other involves monadic second order quantification into the scope of probability operators. These results explain some of the restrictions in previous work.
For the entire collection see [Zbl 1446.68019].Highly automated formal proofs over memory usage of assembly codehttps://www.zbmath.org/1483.682112022-05-16T20:40:13.078697Z"Verbeek, Freek"https://www.zbmath.org/authors/?q=ai:verbeek.freek"Bockenek, Joshua A."https://www.zbmath.org/authors/?q=ai:bockenek.joshua-a"Ravindran, Binoy"https://www.zbmath.org/authors/?q=ai:ravindran.binoySummary: We present a methodology for generating a characterization of the memory used by an assembly program, as well as a formal proof that the assembly is bounded to the generated memory regions. A formal proof of memory usage is required for compositional reasoning over assembly programs. Moreover, it can be used to prove low-level security properties, such as integrity of the return address of a function. Our verification method is based on interactive theorem proving, but provides automation by generating pre- and postconditions, invariants, control-flow, and assumptions on memory layout. As a case study, three binaries of the Xen hypervisor are disassembled. These binaries are the result of a complex build-chain compiling production code, and contain various complex and nested loops, large and compound data structures, and functions with over 100 basic blocks. The methodology has been successfully applied to 251 functions, covering 12,252 assembly instructions.
For the entire collection see [Zbl 1471.68010].Model checking QCTL plus on quantum Markov chainshttps://www.zbmath.org/1483.682122022-05-16T20:40:13.078697Z"Xu, Ming"https://www.zbmath.org/authors/?q=ai:xu.ming"Fu, Jianling"https://www.zbmath.org/authors/?q=ai:fu.jianling"Mei, Jingyi"https://www.zbmath.org/authors/?q=ai:mei.jingyi"Deng, Yuxin"https://www.zbmath.org/authors/?q=ai:deng.yuxinSummary: Verifying temporal properties of quantum systems, including quantum Markov chains (QMCs), has attracted increasing interest in the last decade. Typically, the properties are specified by quantum computation tree logic (QCTL), in which reachability analysis plays a central role. However, little is known about safety, the dual problem. Motivated by this, we propose a more expressive logic -- QCTL\(^+\) (QCTL plus), which extends QCTL by allowing conjunction in path formulas and negation at the top level of path formulas. The former can be used to express conditional events, and the latter can express safety. To deal with conjunction, we present a product construction of classical states in the QMC and the tri-valued truths of atomic path formulas; to deal with negation, we develop an algebraic approach to compute the safety of the bottom strongly connected component subspaces with respect to a super-operator under some necessary and sufficient convergence conditions. We can thereby decide QCTL\(^+\) formulas over QMCs under these conditions; without them, the safety problem remains open.
The complexity of our method is given in terms of the size of both the input QMC and the QCTL\(^+\) formula.Verification of higher-order concurrent programs with dynamic resource creationhttps://www.zbmath.org/1483.682132022-05-16T20:40:13.078697Z"Yasukata, Kazuhide"https://www.zbmath.org/authors/?q=ai:yasukata.kazuhide"Tsukada, Takeshi"https://www.zbmath.org/authors/?q=ai:tsukada.takeshi"Kobayashi, Naoki"https://www.zbmath.org/authors/?q=ai:kobayashi.naokiSummary: We propose a sound and complete static verification method for (higher-order) concurrent programs with dynamic creation of resources, such as locks and thread identifiers. To deal with (possibly infinite) resource creation, we prepare a finite set of abstract resource names and introduce the notion of scope-safety as a sufficient condition for avoiding the confusion of different concrete resources mapped to the same abstract name. We say that a program is scope-safe if no resource is used after the creation of another resource of the same abstract name. We prove that the pairwise-reachability problem is decidable for scope-safe programs with nested locking. We also propose a method for checking that a given program is scope-safe and uses nested locking.
For the entire collection see [Zbl 1347.68009].Formal verification for reliability and performance on core coordination of MPSoChttps://www.zbmath.org/1483.682142022-05-16T20:40:13.078697Z"Zhang, Hui"https://www.zbmath.org/authors/?q=ai:zhang.hui.11|zhang.hui.3|zhang.hui.5|zhang.hui.8|zhang.hui.2|zhang.hui.10|zhang.hui.7|zhang.hui.1|zhang.hui.6|zhang.hui.4|zhang.hui|zhang.hui.9"Wu, Jin Zhao"https://www.zbmath.org/authors/?q=ai:wu.jinzhao"Xie, Ying"https://www.zbmath.org/authors/?q=ai:xie.ying.1|xie.ying"Cao, Jun Yue"https://www.zbmath.org/authors/?q=ai:cao.junyue(no abstract)Formal co-verification for the correctness and timing requirements of SoC designhttps://www.zbmath.org/1483.682152022-05-16T20:40:13.078697Z"Zhan, Jin-yu"https://www.zbmath.org/authors/?q=ai:zhan.jinyu"Xiong, Guang-ze"https://www.zbmath.org/authors/?q=ai:xiong.guangze"Sang, Nan"https://www.zbmath.org/authors/?q=ai:sang.nan(no abstract)Relational verification via invariant-guided synchronizationhttps://www.zbmath.org/1483.682162022-05-16T20:40:13.078697Z"Zhou, Qi"https://www.zbmath.org/authors/?q=ai:zhou.qi.1|zhou.qi.2|zhou.qi"Heath, David"https://www.zbmath.org/authors/?q=ai:heath.david-g"Harris, William"https://www.zbmath.org/authors/?q=ai:harris.william-r|harris.william-f.1|harris.william-s|harris.william-a-junSummary: Relational properties describe relationships that hold over multiple executions of one or more programs, such as functional equivalence. Conventional approaches for automatically verifying such properties typically rely on syntax-based, heuristic strategies for finding \textit{synchronization points} among the input programs. These synchronization points are then annotated with appropriate relational invariants to complete the proof. However, when suboptimal synchronization points are chosen the required invariants can be complicated or even inexpressible in the target theory.
In this work, we propose a novel approach to verifying relational properties. This approach searches for synchronization points and synthesizes relational invariants \textit{simultaneously}. Specifically, the approach uses synthesized invariants as a guide for finding proper synchronization points that lead to a complete proof. We implemented our approach as a tool named \textsc{Pequod}, which targets Java Virtual Machine (JVM) bytecode. We evaluated \textsc{Pequod} by using it to solve verification challenges drawn from the research literature and by verifying properties of student-submitted solutions to online challenge problems. The results show that \textsc{Pequod} solves verification problems that cannot be addressed by current techniques.
For the entire collection see [Zbl 1441.68008].Generalized results on monoids as memoryhttps://www.zbmath.org/1483.682172022-05-16T20:40:13.078697Z"Salehi, Özlem"https://www.zbmath.org/authors/?q=ai:salehi.ozlem"D'Alessandro, Flavio"https://www.zbmath.org/authors/?q=ai:dalessandro.flavio"Cem Say, A. C."https://www.zbmath.org/authors/?q=ai:say.a-c-cemSummary: We show that some results from the theory of group automata and monoid automata still hold for more general classes of monoids and models. Extending previous work for finite automata over commutative groups, we prove that the context-free language \(\mathtt{L}_1^*=\{a^nb^n : n\geq1\}^*\) cannot be recognized by any rational monoid automaton over a finitely generated permutable monoid. We show that the class of languages recognized by rational monoid automata over finitely generated completely simple or completely 0-simple permutable monoids is a semi-linear full trio. Furthermore, we investigate valence pushdown automata, and prove that they are only as powerful as (finite) valence automata. We observe that certain results proven for monoid automata can be easily lifted to the case of context-free valence grammars.
For the entire collection see [Zbl 1433.68013].A symbiosis between cellular automata and genetic algorithmshttps://www.zbmath.org/1483.682182022-05-16T20:40:13.078697Z"Cerruti, Umberto"https://www.zbmath.org/authors/?q=ai:cerruti.umberto"Dutto, Simone"https://www.zbmath.org/authors/?q=ai:dutto.simone"Murru, Nadir"https://www.zbmath.org/authors/?q=ai:murru.nadirSummary: Cellular automata are systems which use a rule to describe the evolution of a population in a discrete lattice, while genetic algorithms are procedures designed to find solutions to optimization problems inspired by the process of natural selection. In this paper, we introduce an original implementation of a cellular automaton whose rules use a fitness function to select for each cell the best mate to reproduce and a crossover operator to determine the resulting offspring. This new system, with a proper definition, can be both a cellular automaton and a genetic algorithm. We show that Conway's Game of Life can be easily implemented in our system and, consequently, that it is capable of universal computation. Moreover, two generalizations of the Game of Life are created and also implemented with it. Finally, we use our system for studying and implementing the prisoner's dilemma and rock-paper-scissors games, showing very interesting behaviors and configurations (e.g., gliders) inside these games.Deciding the bisimilarity of context-free session typeshttps://www.zbmath.org/1483.682192022-05-16T20:40:13.078697Z"Almeida, Bernardo"https://www.zbmath.org/authors/?q=ai:almeida.bernardo-f"Mordido, Andreia"https://www.zbmath.org/authors/?q=ai:mordido.andreia"Vasconcelos, Vasco T."https://www.zbmath.org/authors/?q=ai:vasconcelos.vasco-thudichumSummary: We present an algorithm to decide the equivalence of context-free session types, practical to the point of being incorporated in a compiler. We prove its soundness and completeness. We further evaluate its behaviour in practice.
In the process, we introduce an algorithm to decide the bisimilarity of simple grammars.
For the entire collection see [Zbl 1471.68010].Dynamics of reputation in mobile agents systems and weighted timed automatahttps://www.zbmath.org/1483.682202022-05-16T20:40:13.078697Z"Aman, Bogdan"https://www.zbmath.org/authors/?q=ai:aman.bogdan"Ciobanu, Gabriel"https://www.zbmath.org/authors/?q=ai:ciobanu.gabrielSummary: The reputation of a process is based on its past and present behaviour; it evolves in time depending on several factors including the actions performed, the reputation of its interacting processes and locations where the process resides. We design a calculus of mobile agents in distributed systems able to handle the dynamics of reputation which changes over time depending on specified factors. The main idea is to provide a formal understanding of how reputation is formed from complex interactions in agent systems based on timed migration and communication, and how we can automatically verify properties involving reputation. After presenting the operational semantics of our calculus and some results, the mobile agents easily described in this calculus are translated into networks of weighted timed automata with bisimilar behaviour. Based on this translation, we use an existing software platform to analyze reputation-based properties of mobile agents in distributed systems.
Due to transparency, the correspondence between context-free and pushdown processes fails modulo bisimilarity, and it is not clear how to specify an always terminating half counter. We propose a revised operational semantics for the sequential composition operator in the context of intermediate termination. With the revised operational semantics, we eliminate transparency, allowing us to establish a close correspondence between context-free processes and pushdown processes. Moreover, we prove the reactive Turing powerfulness of TCP with iteration and nesting with the revised operational semantics for sequential composition.
For the entire collection see [Zbl 1433.68033].Behavioural logics for configuration structureshttps://www.zbmath.org/1483.682222022-05-16T20:40:13.078697Z"Baldan, Paolo"https://www.zbmath.org/authors/?q=ai:baldan.paolo"Gorla, Daniele"https://www.zbmath.org/authors/?q=ai:gorla.daniele"Padoan, Tommaso"https://www.zbmath.org/authors/?q=ai:padoan.tommaso"Salvo, Ivano"https://www.zbmath.org/authors/?q=ai:salvo.ivanoSummary: We provide a behavioural logic for configuration structures, a model due to van Glabbeek and Plotkin which generalises the families of (finite) configurations of event structures. The logic is a conservative extension of a logic provided by Baldan and Crafa for prime event structures. We show that logical equivalence can be characterized as a form of hereditary history preserving bisimilarity. We compare such a notion of bisimilarity with an equivalence proposed by van Glabbeek in the setting of higher-dimensional automata, showing that, in general, it is finer, while the two notions coincide in the framework of general event structures. Finally, we explore how to restrict the general logic to capture a notion of history preserving bisimilarity.Sound reasoning in \textit{tock}-CSPhttps://www.zbmath.org/1483.682232022-05-16T20:40:13.078697Z"Baxter, James"https://www.zbmath.org/authors/?q=ai:baxter.james"Ribeiro, Pedro"https://www.zbmath.org/authors/?q=ai:ribeiro.pedro"Cavalcanti, Ana"https://www.zbmath.org/authors/?q=ai:cavalcanti.anaSummary: Specifying budgets and deadlines using a process algebra like CSP requires an explicit notion of time. The \textit{tock}-CSP encoding embeds a rich and flexible approach for modelling discrete-time behaviours with powerful tool support. It uses an event \textit{tock}, interpreted to mark passage of time. Analysis, however, has traditionally used the standard semantics of CSP, which is inadequate for reasoning about timed refinement. 
The most recent version of the model checker FDR provides tailored support for \textit{tock}-CSP, including specific operators, but the standard semantics remains inadequate. In this paper, we characterise \textit{tock}-CSP as a language in its own right, rich enough to model budgets and deadlines, and reason about Zeno behaviour. We present the first sound tailored semantic model for \textit{tock}-CSP that captures timewise refinement. It is fully mechanised in Isabelle/HOL and, to enable use of FDR4 to check refinement in this novel model, we use model shifting, which is a technique that explicitly encodes refusals in traces.Alternative characterizations of probabilistic trace equivalences on coherent resolutions of nondeterminismhttps://www.zbmath.org/1483.682242022-05-16T20:40:13.078697Z"Bernardo, Marco"https://www.zbmath.org/authors/?q=ai:bernardo.marcoSummary: For nondeterministic and probabilistic processes, the validity of some desirable properties of probabilistic trace semantics depends both on the class of schedulers used to resolve nondeterminism and on the capability of suitably limiting the power of the considered schedulers. Inclusion of probabilistic bisimilarity, compositionality with respect to typical process operators, and backward compatibility with trace semantics over fully nondeterministic or fully probabilistic processes, can all be achieved by restricting to coherent resolutions of nondeterminism. Here we provide alternative characterizations of probabilistic trace post-equivalence and pre-equivalence in the case of coherent resolutions. The characterization of the former is based on fully coherent trace distributions, whereas the characterization of the latter relies on coherent weighted trace sets.
For the entire collection see [Zbl 1475.68022].A new method for business process verification based on Petri-nethttps://www.zbmath.org/1483.682252022-05-16T20:40:13.078697Z"Chang, Guan Yu"https://www.zbmath.org/authors/?q=ai:chang.guanyu"Yang, Hai Cheng"https://www.zbmath.org/authors/?q=ai:yang.haicheng"Mo, Rong"https://www.zbmath.org/authors/?q=ai:mo.rong(no abstract)Eager functions as processeshttps://www.zbmath.org/1483.682262022-05-16T20:40:13.078697Z"Durier, Adrien"https://www.zbmath.org/authors/?q=ai:durier.adrien"Hirschkoff, Daniel"https://www.zbmath.org/authors/?q=ai:hirschkoff.daniel"Sangiorgi, Davide"https://www.zbmath.org/authors/?q=ai:sangiorgi.davideSummary: We study Milner's encoding of the call-by-value \(\lambda \)-calculus into the \(\pi \)-calculus. We show that, by tuning the encoding to two subcalculi of the \(\pi \)-calculus (Internal \(\pi\) and Asynchronous Local \(\pi )\), the equivalence on \(\lambda \)-terms induced by the encoding coincides with Lassen's eager normal-form bisimilarity, extended to handle \(\eta \)-equality. As behavioural equivalence in the \(\pi \)-calculus we consider contextual equivalence and barbed congruence. We also extend the results to preorders.
A crucial technical ingredient in the proofs is the recently-introduced technique of unique solutions of equations, further developed in this paper. In this respect, the paper also intends to be an extended case study on the applicability and expressiveness of the technique.Analysing mutual exclusion using process algebra with signalshttps://www.zbmath.org/1483.682272022-05-16T20:40:13.078697Z"Dyseryn, Victor"https://www.zbmath.org/authors/?q=ai:dyseryn.victor"van Glabbeek, Rob"https://www.zbmath.org/authors/?q=ai:van-glabbeek.robert-jan"Höfner, Peter"https://www.zbmath.org/authors/?q=ai:hofner.peterSummary: In contrast to common belief, the Calculus of Communicating Systems (CCS) and similar process algebras lack the expressive power to accurately capture mutual exclusion protocols without enriching the language with fairness assumptions. Adding a fairness assumption to implement a mutual exclusion protocol seems counter-intuitive. We employ a signalling operator, which can be combined with CCS, or other process calculi, and show that this minimal extension is expressive enough to model mutual exclusion: we confirm the correctness of Peterson's mutual exclusion algorithm for two processes, as well as Lamport's bakery algorithm, under reasonable assumptions on the underlying memory model. The correctness of Peterson's algorithm for more than two processes requires stronger, less realistic assumptions on the underlying memory model.
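For reference, Peterson's two-process algorithm, which the paper verifies in CCS extended with signals, has the familiar shape sketched below. This is an illustrative Python transcription, not the authors' process-algebraic model; as the abstract stresses, its correctness depends on the underlying memory model (the sketch relies on CPython's effectively sequentially consistent interleaving, and the `time.sleep(0)` yield in the busy-wait is only there to keep the sketch fast under the GIL).

```python
import threading
import time

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # index of the process expected to yield
counter = 0            # shared resource protected by the protocol

def acquire(i):
    """Peterson's entry protocol for process i (i in {0, 1})."""
    global turn
    other = 1 - i
    flag[i] = True
    turn = other                    # politely let the other go first
    while flag[other] and turn == other:
        time.sleep(0)               # busy-wait until it is safe to enter

def release(i):
    """Exit protocol: simply withdraw the request."""
    flag[i] = False

def worker(i, n):
    global counter
    for _ in range(n):
        acquire(i)
        counter += 1                # critical section
        release(i)

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000: no increment is lost under mutual exclusion
```

The subtlety the paper addresses is precisely that such spin-loop protocols cannot be faithfully captured in plain CCS without fairness assumptions, which the signalling operator avoids.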
For the entire collection see [Zbl 1433.68033].Bisimulation and Hennessy-Milner logic for generalized synchronization treeshttps://www.zbmath.org/1483.682282022-05-16T20:40:13.078697Z"Ferlez, James"https://www.zbmath.org/authors/?q=ai:ferlez.james"Cleaveland, Rance"https://www.zbmath.org/authors/?q=ai:cleaveland.rance"Marcus, Steve"https://www.zbmath.org/authors/?q=ai:marcus.steveSummary: In this work, we develop a generalization of Hennessy-Milner Logic (HML) for Generalized Synchronization Trees (GSTs) that we call Generalized Hennessy Milner Logic (GHML). Importantly, this logic suggests a strong relationship between (weak) bisimulation for GSTs and ordinary bisimulation for Synchronization Trees (STs). We demonstrate that this relationship can be used to define the GST analog for image-finiteness of STs. Furthermore, we demonstrate that certain maximal Hennessy-Milner classes of STs have counterparts in maximal Hennessy-Milner classes of GSTs with respect to GST weak bisimulation. We also exhibit some interesting characteristics of these maximal Hennessy-Milner classes of GSTs.
For the entire collection see [Zbl 1433.68033].The ideal theory for WSTShttps://www.zbmath.org/1483.682292022-05-16T20:40:13.078697Z"Finkel, Alain"https://www.zbmath.org/authors/?q=ai:finkel.alainSummary: We begin with a survey on well structured transition systems and, in particular, we present the ideal framework
[\textit{A. Finkel} and \textit{J. Goubault-Larrecq}, LIPIcs -- Leibniz Int. Proc. Inform. 3, 433--444 (2009; Zbl 1236.68183);
\textit{M. Blondin} et al., Lect. Notes Comput. Sci. 8573, 13--25 (2014; Zbl 1382.68149)]
which was recently used to obtain new deep results on Petri nets and extensions. We argue that the theory of ideals prompts a renewal of the theory of WSTS by providing a way to define a new class of monotonic systems, the so-called well behaved transition systems, which properly contains WSTS, and for which coverability is still decidable by a forward algorithm. We then recall the completion of WSTS which leads to defining a conceptual Karp-Miller procedure that terminates in more cases than the generalized Karp-Miller procedure on extensions of Petri nets.
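To make the coverability problem concrete, here is a small, self-contained sketch for plain Petri nets using the classical backward algorithm, with upward-closed sets represented by their finite bases of minimal elements (Dickson's lemma guarantees termination). It is included only for illustration and is not the forward, ideal-based procedure the survey advocates; the encoding of transitions as (pre, post) marking vectors is an assumption of the example.

```python
def coverable(init, target, transitions):
    """Backward coverability for a Petri net / vector addition system.
    `transitions` is a list of (pre, post) marking vectors; the question is
    whether some marking >= target is reachable from init."""
    geq = lambda x, y: all(a >= b for a, b in zip(x, y))
    basis = {tuple(target)}       # basis of the upward-closed set Pre*(up(target))
    frontier = set(basis)
    while frontier:
        new = set()
        for b in frontier:
            for pre, post in transitions:
                # Minimal m with m >= pre and m + (post - pre) >= b:
                cand = tuple(max(p, bi - (q - p))
                             for p, q, bi in zip(pre, post, b))
                if not any(geq(cand, m) for m in basis | new):
                    new.add(cand)
        basis |= new
        # keep only minimal elements of the basis
        basis = {m for m in basis
                 if not any(geq(m, o) and m != o for o in basis)}
        frontier = new & basis
    return any(geq(init, b) for b in basis)

# One transition moving a token from place 0 to place 1: pre=(1,0), post=(0,1).
net = [((1, 0), (0, 1))]
print(coverable((1, 0), (0, 1), net))  # True: fire the transition once
print(coverable((0, 0), (0, 1), net))  # False: no token to move
```

The forward approach surveyed above instead completes the state space with ideals (downward-closed sets), which is what enables the conceptual Karp-Miller procedure mentioned in the abstract.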
For the entire collection see [Zbl 1371.68012].A quantitative study of fork-join processes with non-deterministic choice: application to the statistical exploration of the state-spacehttps://www.zbmath.org/1483.682302022-05-16T20:40:13.078697Z"Genitrini, Antoine"https://www.zbmath.org/authors/?q=ai:genitrini.antoine"Pépin, Martin"https://www.zbmath.org/authors/?q=ai:pepin.martin"Peschanski, Frédéric"https://www.zbmath.org/authors/?q=ai:peschanski.fredericSummary: We study concurrent processes with non-deterministic choice, loops and a fork-join style of coordination under the lens of combinatorics. As a starting point, we interpret these processes as combinatorial structures. We propose a framework, based on analytic combinatorics, allowing to analyse their quantitative aspects such as the average number of execution paths induced by the choice operator, or the proportion of executions of a process with respect to its number of execution prefixes. Building on this theoretical investigation, we develop efficient algorithms aimed at the statistical exploration of their state-space. The first algorithm is a uniform random sampler of bounded executions, providing a good default exploration strategy. The second algorithm is a uniform random sampler of execution prefixes of a given bounded length, allowing to bias the exploration in a controlled manner. The fundamental characteristics of these algorithms is that they work on the syntax of the programs and do not require the explicit construction of the state-space.Binary session types for psi-calculihttps://www.zbmath.org/1483.682312022-05-16T20:40:13.078697Z"Hüttel, Hans"https://www.zbmath.org/authors/?q=ai:huttel.hansSummary: Binary session types can be used to describe communication protocols, and to ensure a variety of properties, e.g. deadlock freedom, liveness, or secure information flow. 
Session type systems are often formulated for variants of the \(\pi\)-calculus, and for each such system, the central properties such as session fidelity must be re-established.
The framework of psi-calculi introduced by Bengtson et al. makes it possible to give a general account of variants of the \(\pi\)-calculus. We use this framework to describe a generic session type system for variants of the \(\pi\)-calculus. In this generic system, standard properties, including fidelity, hold at the level of the framework and are then guaranteed to hold when the generic system is instantiated.
We show that our system can capture existing systems including the session type system due to Gay and Hole, a type system for progress due to Vieira and Vasconcelos, and a refinement type system due to Baltazar et al. The standard fidelity property is proved at the level of the generic system and automatically holds when the system is instantiated.
For the entire collection see [Zbl 1347.68009].An \(O(m \log n)\) algorithm for branching bisimilarity on labelled transition systemshttps://www.zbmath.org/1483.682322022-05-16T20:40:13.078697Z"Jansen, David N."https://www.zbmath.org/authors/?q=ai:jansen.david-n"Groote, Jan Friso"https://www.zbmath.org/authors/?q=ai:groote.jan-friso"Keiren, Jeroen J. A."https://www.zbmath.org/authors/?q=ai:keiren.jeroen-j-a"Wijs, Anton"https://www.zbmath.org/authors/?q=ai:wijs.anton-jSummary: Branching bisimilarity is a behavioural equivalence relation on labelled transition systems (LTSs) that takes internal actions into account. It has the traditional advantage that algorithms for branching bisimilarity are more efficient than ones for other weak behavioural equivalences, especially weak bisimilarity. With \(m\) the number of transitions and \(n\) the number of states, the classic \({O\left( {m n}\right) }\) algorithm was recently replaced by an \(O({m (\log \left| { Act }\right| + \log n)})\) algorithm
[the authors, ACM Trans. Comput. Log. 18, No. 2, Article No. 13, 34 p. (2017; Zbl 1367.68211)],
which is unfortunately rather complex. This paper combines its ideas with the ideas from
[\textit{A. Valmari}, Lect. Notes Comput. Sci. 5606, 123--142 (2009; Zbl 1242.68186)],
resulting in a simpler \(O({m \log n})\) algorithm. Benchmarks show that in practice this algorithm is also faster and often far more memory efficient than its predecessors, making it the best option for branching bisimulation minimisation and preprocessing for calculating other weak equivalences on LTSs.
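To make the partition-refinement idea underlying such algorithms concrete, here is a deliberately naive sketch for \textit{strong} bisimilarity, the simpler relation that ignores internal actions; the paper's \(O(m \log n)\) branching-bisimilarity algorithm is far more refined, and all names below are illustrative, not the authors' code.

```python
from collections import defaultdict

def strong_bisim_classes(states, transitions):
    """Naive partition refinement for strong bisimilarity.

    transitions: a set of (source, action, target) triples.
    Returns the coarsest strong-bisimulation partition as a set
    of frozensets. Illustrative only: each round costs O(mn).
    """
    succ = defaultdict(set)
    for s, a, t in transitions:
        succ[s].add((a, t))

    # Start with one block containing all states, then split blocks
    # until every state in a block has the same "signature": the set
    # of (action, target-block) pairs it can move to.
    partition = {frozenset(states)}
    while True:
        block_of = {s: b for b in partition for s in b}
        new_partition = set()
        for block in partition:
            groups = defaultdict(set)
            for s in block:
                sig = frozenset((a, block_of[t]) for a, t in succ[s])
                groups[sig].add(s)
            new_partition.update(frozenset(g) for g in groups.values())
        if new_partition == partition:
            return partition
        partition = new_partition
```

On a small LTS this converges in a handful of rounds; it is exactly this per-round quadratic cost that the sophisticated signature and block bookkeeping of the \(O(m \log n)\) algorithms avoids.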
For the entire collection see [Zbl 1471.68010].Sharp congruences adequate with temporal logics combining weak and strong modalitieshttps://www.zbmath.org/1483.682332022-05-16T20:40:13.078697Z"Lang, Frédéric"https://www.zbmath.org/authors/?q=ai:lang.frederic"Mateescu, Radu"https://www.zbmath.org/authors/?q=ai:mateescu.radu"Mazzanti, Franco"https://www.zbmath.org/authors/?q=ai:mazzanti.francoSummary: We showed in a recent paper that, when verifying a modal \(\mu \)-calculus formula, the actions of the system under verification can be partitioned into sets of so-called weak and strong actions, depending on the combination of weak and strong modalities occurring in the formula. In a compositional verification setting, where the system consists of processes executing in parallel, this partition allows us to decide whether each individual process can be minimized for either divergence-preserving branching (if the process contains only weak actions) or strong (otherwise) bisimilarity, while preserving the truth value of the formula. In this paper, we refine this idea by devising a family of bisimilarity relations, named sharp bisimilarities, parameterized by the set of strong actions. We show that these relations have all the nice properties necessary to be used for compositional verification, in particular congruence and adequacy with the logic. We also illustrate their practical utility on several examples and case-studies, and report about our success in the RERS 2019 model checking challenge.
For the entire collection see [Zbl 1471.68010].A denotational semantics for parameterised networks of synchronised automatahttps://www.zbmath.org/1483.682342022-05-16T20:40:13.078697Z"Li, Siqi"https://www.zbmath.org/authors/?q=ai:li.siqi"Madelaine, Eric"https://www.zbmath.org/authors/?q=ai:madelaine.ericSummary: Parameterised networks of synchronised automata (pNets) is a machine-oriented semantic formalism used for specifying and verifying the behaviour of distributed components or systems. In addition, it can be used to define the semantics of languages in the parallel and distributed computation area. Unlike other traditional process calculi, pNets only own one pNet node as an operator which composes all subnets running in parallel. Using this single synchronisation artifact, it is capable of expressing many operators or synchronisation mechanisms. In this paper, we explore a denotational semantics for parameterised networks. The denotational semantics of parameterised networks we investigate is based on the behaviours of their subnets. The behaviour of a subnet is determined by both its state and the actions it executes. Based on the traces of a set of subnets, the behaviour of a pNet consisting of those subnets can be deduced. A set of algebraic laws is also explored based on the denotational semantics.
For the entire collection see [Zbl 1355.68010].On the distributability of mobile ambientshttps://www.zbmath.org/1483.682352022-05-16T20:40:13.078697Z"Peters, Kirstin"https://www.zbmath.org/authors/?q=ai:peters.kirstin"Nestmann, Uwe"https://www.zbmath.org/authors/?q=ai:nestmann.uweSummary: Modern society is dependent on distributed software systems and to verify them different modelling languages such as mobile ambients were developed. To analyse the quality of mobile ambients as a good foundational model for distributed computation, we analyse the level of synchronisation between distributed components that they can express. Therefore, we rely on earlier established synchronisation patterns. It turns out that mobile ambients are not fully distributed, because they can express enough synchronisation to express a synchronisation pattern called \textsf{M}. However, they can express strictly less synchronisation than the standard pi-calculus. For this reason, we can show that there is no good and distributability-preserving encoding from the standard pi-calculus into mobile ambients and also no such encoding from mobile ambients into the join-calculus, i.e., the expressive power of mobile ambients is in between these languages. Finally, we discuss how these results can be used to obtain a fully distributed variant of mobile ambients.
For the entire collection see [Zbl 1435.68037].On distributabilityhttps://www.zbmath.org/1483.682362022-05-16T20:40:13.078697Z"Peters, Kirstin"https://www.zbmath.org/authors/?q=ai:peters.kirstin"Nestmann, Uwe"https://www.zbmath.org/authors/?q=ai:nestmann.uwe"Schmitt, Anna"https://www.zbmath.org/authors/?q=ai:schmitt.annaSummary: The Actor model of computation is the backbone for the design of some of the most successful distributed programming platforms, Erlang and Akka. It builds upon asynchronous message-passing (liberating senders from waiting) and unique message-routing (using unique Actor addresses), which enables relatively simple distributed implementations. The Actor model shares these principles with the model of computation underlying JoCaml, which supports the distributed implementation of OCaml components. In this paper, we support the distributability of the two models on theoretical grounds. For this, we investigate three refined variants of the \(\pi \)-calculus that can be understood as the essence of the two models. We show that none of the three calculi is able to express a synchronization pattern that is known from the world of Petri nets and that was used by us as one criterion in a distributability hierarchy of process calculi. Moreover, we exhibit three encodings between the three calculi and prove that they each preserve the degree of distributability between source and target terms.`True concurrency' semantics for time Petri nets with weak time and persistent atomic policieshttps://www.zbmath.org/1483.682372022-05-16T20:40:13.078697Z"Virbitskaite, I. B."https://www.zbmath.org/authors/?q=ai:virbitskaite.irina-b|virbitskaite.irina-bonaventurovna"Zubarev, A. Yu."https://www.zbmath.org/authors/?q=ai:zubarev.andrey-yuSummary: Dense-Time Petri nets (TPNs), where time intervals for transition firings are assigned, are now a well-established model, which is used to describe and study safety-critical systems. 
Generally, the state space of the TPN is uncountably infinite, which increases the complexity of model analysis. `True concurrency' semantics represents the behavior of the TPN as a set of actions, the causality relation between which is modeled by a partial order, and the absence of causality implies concurrency. This representation is more adequate for verifying properties of concurrent systems, e.g., deadlock-freedom, fairness, maximum parallelism, etc. This paper introduces and investigates step semantics (based on sets of concurrent actions) and partial order semantics (based on sets of causally ordered and concurrent actions) for TPNs the behavior of which is governed by the weak time policy (i.e., the passage of model time is not constrained by transition firing) and persistent atomic memory technique (i.e., transition firing is regarded as an atomic action when clocks are reset).UTP semantics for rTiMohttps://www.zbmath.org/1483.682382022-05-16T20:40:13.078697Z"Xie, Wanling"https://www.zbmath.org/authors/?q=ai:xie.wanling"Xiang, Shuangqing"https://www.zbmath.org/authors/?q=ai:xiang.shuangqingSummary: rTiMo is a real-time version of TiMo (Timed Mobility), which is a process algebra for mobile distributed systems. In this paper, we investigate the denotational semantics for rTiMo. A trace variable tr is introduced to record the communications among processes as well as the location where the communication action takes place. Based on the formalized model, we study a set of algebraic laws, especially the laws about the migration and communication with real-time constraints. In order to facilitate the algebraic reasoning about the parallel expansion laws, we enrich rTiMo with a form of guarded choice. This enables us to convert every parallel construct to a guarded choice.
For the entire collection see [Zbl 1355.68010].Specifying multi-enablings using flow event structure based partial order logichttps://www.zbmath.org/1483.682392022-05-16T20:40:13.078697Z"Yan, Wei"https://www.zbmath.org/authors/?q=ai:yan.wei"Wu, Jin Zhao"https://www.zbmath.org/authors/?q=ai:wu.jinzhao"Gao, Xin Yan"https://www.zbmath.org/authors/?q=ai:gao.xinyan(no abstract)Complexity of fixed point counting problems in Boolean networkshttps://www.zbmath.org/1483.682402022-05-16T20:40:13.078697Z"Bridoux, Florian"https://www.zbmath.org/authors/?q=ai:bridoux.florian"Durbec, Amélia"https://www.zbmath.org/authors/?q=ai:durbec.amelia"Perrot, Kevin"https://www.zbmath.org/authors/?q=ai:perrot.kevin"Richard, Adrien"https://www.zbmath.org/authors/?q=ai:richard.adrienSummary: A \textit{Boolean network} (BN) with \(n\) components is a discrete dynamical system described by the successive iterations of a function \(f : \{ 0,1\}^n \to \{ 0,1\}^n\). This model finds applications in biology, where fixed points play a central role. For example, in genetic regulations, they correspond to cell phenotypes. In this context, experiments reveal the existence of positive or negative influences among components. The digraph of influences is called \textit{signed interaction digraph} (SID), and one SID may correspond to a large number of BNs. The present work opens a new perspective on the well-established study of fixed points in BNs. When biologists discover the SID of a BN they do not know, they may ask: given that SID, can it correspond to a BN having at least/at most \(k\) fixed points? 
Depending on the input, we prove that these problems are in \(\mathsf{P}\) or complete for \(\mathsf{NP}\), \(\mathsf{NP}^{\mathsf{NP}}\), \(\mathsf{NP}^{\#\mathsf{P}}\) or \(\mathsf{NEXPTIME}\).A reflexive algorithm for the rook problemhttps://www.zbmath.org/1483.682412022-05-16T20:40:13.078697Z"Kralchev, Dobromir P."https://www.zbmath.org/authors/?q=ai:kralchev.dobromir-p"Dimov, Dimcho S."https://www.zbmath.org/authors/?q=ai:dimov.dimcho-s"Penev, Alexander P."https://www.zbmath.org/authors/?q=ai:penev.alexander-pSummary: We propose a new, heuristic algorithm for the rook problem. The algorithm is reflexive: it examines its own running-time, which is in correlation with the output.Model counting with error-correcting codeshttps://www.zbmath.org/1483.682422022-05-16T20:40:13.078697Z"Achlioptas, Dimitris"https://www.zbmath.org/authors/?q=ai:achlioptas.dimitris"Theodoropoulos, Panos"https://www.zbmath.org/authors/?q=ai:theodoropoulos.panosSummary: The idea of counting the number of satisfying truth assignments (models) of a formula by adding random parity constraints can be traced back to the seminal work of Valiant and Vazirani showing that NP is as easy as detecting unique solutions. While theoretically sound, the random parity constraints used in that construction suffer from the following drawback: each constraint, on average, involves half of all variables. As a result, the branching factor associated with searching for models that also satisfy the parity constraints quickly gets out of hand. In this work we prove that one can work with much shorter parity constraints and still get rigorous mathematical guarantees, especially when the number of models is large so that many constraints need to be added. 
Our work is motivated by the realization that the essential feature for a system of parity constraints to be useful in probabilistic model counting is that its set of solutions resembles an error-correcting code.Ultimate TreeAutomizer (CHC-COMP tool description)https://www.zbmath.org/1483.682432022-05-16T20:40:13.078697Z"Dietsch, Daniel"https://www.zbmath.org/authors/?q=ai:dietsch.daniel"Heizmann, Matthias"https://www.zbmath.org/authors/?q=ai:heizmann.matthias"Hoenicke, Jochen"https://www.zbmath.org/authors/?q=ai:hoenicke.jochen"Nutz, Alexander"https://www.zbmath.org/authors/?q=ai:nutz.alexander"Podelski, Andreas"https://www.zbmath.org/authors/?q=ai:podelski.andreasSummary: We present Ultimate TreeAutomizer, a solver for satisfiability of sets of constrained Horn clauses. Constrained Horn clauses (CHC) are a fragment of first order logic with attractive properties in terms of expressiveness and accessibility to algorithmic solving. Ultimate TreeAutomizer is based on the techniques of trace abstraction, tree automata and tree interpolation. This paper serves as a tool description for TreeAutomizer in CHC-COMP 2019.
For the entire collection see [Zbl 1441.68008].\(N\)-level modulo-based CNF encodings of pseudo-Boolean constraints for MaxSAThttps://www.zbmath.org/1483.682442022-05-16T20:40:13.078697Z"Zha, Aolong"https://www.zbmath.org/authors/?q=ai:zha.aolong"Koshimura, Miyuki"https://www.zbmath.org/authors/?q=ai:koshimura.miyuki"Fujita, Hiroshi"https://www.zbmath.org/authors/?q=ai:fujita.hiroshi.2Summary: Many combinatorial problems in various fields can be translated to Maximum Satisfiability (MaxSAT) problems. Although the general problem is \(\mathcal{N}\mathcal{P}\)-hard, more and more practical problems may be solved due to the significant effort which has been devoted to the development of efficient solvers. The art of constraint encoding is as important as the art of devising algorithms for MaxSAT. In this paper, we present several encoding methods of pseudo-Boolean constraints into Boolean satisfiability problems in Conjunctive Normal Form (CNF) formula, which are based on the idea of modular arithmetic and only generate auxiliary variables for each unique combination of weights. These techniques are efficient in encoding and solving MaxSAT problems. In particular, our solvers won the partial MaxSAT industrial category from 2010 through 2012 and ranked second in the 2017 main weighted track of the MaxSAT evaluation. We prove the correctness and the pseudo-polynomial space complexity of our encodings and also give a heuristic for the base selection for modular arithmetic. 
Our experimental results show that our encoding compactly encodes the constraints, and the obtained clauses are efficiently handled by a state-of-the-art SAT solver.Safe sets in graphs: graph classes and structural parametershttps://www.zbmath.org/1483.682452022-05-16T20:40:13.078697Z"Águeda, Raquel"https://www.zbmath.org/authors/?q=ai:agueda.raquel"Cohen, Nathann"https://www.zbmath.org/authors/?q=ai:cohen.nathann"Fujita, Shinya"https://www.zbmath.org/authors/?q=ai:fujita.shinya"Legay, Sylvain"https://www.zbmath.org/authors/?q=ai:legay.sylvain"Manoussakis, Yannis"https://www.zbmath.org/authors/?q=ai:manoussakis.yannis-g"Matsui, Yasuko"https://www.zbmath.org/authors/?q=ai:matsui.yasuko"Montero, Leandro"https://www.zbmath.org/authors/?q=ai:montero.leandro-p"Naserasr, Reza"https://www.zbmath.org/authors/?q=ai:naserasr.reza"Otachi, Yota"https://www.zbmath.org/authors/?q=ai:otachi.yota"Sakuma, Tadashi"https://www.zbmath.org/authors/?q=ai:sakuma.tadashi"Tuza, Zsolt"https://www.zbmath.org/authors/?q=ai:tuza.zsolt"Xu, Renyu"https://www.zbmath.org/authors/?q=ai:xu.renyuSummary: A safe set of a graph \(G=(V,E)\) is a non-empty subset \(S\) of \(V\) such that for every component \(A\) of \(G[S]\) and every component \(B\) of \(G[V \setminus S]\), we have \(|A| \geq |B|\) whenever there exists an edge of \(G\) between \(A\) and \(B\). In this paper, we show that a minimum safe set can be found in polynomial time for trees. We then further extend the result and present polynomial-time algorithms for graphs of bounded treewidth, and also for interval graphs. We also study the parameterized complexity of the problem. We show that the problem is fixed-parameter tractable when parameterized by the solution size. Furthermore, we show that this parameter lies between tree-depth and vertex cover number.
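The safe-set definition above is easy to operationalise directly; the following brute-force reference implementation (illustrative only, with hypothetical names, and exponential where the paper's contribution is polynomial-time algorithms for trees, bounded treewidth, and interval graphs) checks the condition component by component:

```python
from itertools import combinations

def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    vertices = set(vertices)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w in vertices)
        seen |= comp
        comps.append(comp)
    return comps

def is_safe_set(S, vertices, adj):
    """Check the definition directly: for every component A of G[S] and
    every component B of G[V \\ S] joined by an edge, require |A| >= |B|."""
    S = set(S)
    if not S:
        return False
    for A in components(S, adj):
        for B in components(set(vertices) - S, adj):
            touching = any(b in adj[a] for a in A for b in B)
            if touching and len(A) < len(B):
                return False
    return True

def minimum_safe_set(vertices, adj):
    """Smallest safe set by exhaustive search (fine only for tiny graphs)."""
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_safe_set(S, vertices, adj):
                return set(S)
    return None
```

For example, on the path 1-2-3-4 no single vertex is safe (a lone endpoint faces a component of size 3), but any two adjacent vertices are, so the minimum safe set has size 2.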
For the entire collection see [Zbl 1377.68004].Parametric multi-step scheme for GPU-accelerated graph decomposition into strongly connected componentshttps://www.zbmath.org/1483.682462022-05-16T20:40:13.078697Z"Aldegheri, Stefano"https://www.zbmath.org/authors/?q=ai:aldegheri.stefano"Barnat, Jiří"https://www.zbmath.org/authors/?q=ai:barnat.jiri"Bombieri, Nicola"https://www.zbmath.org/authors/?q=ai:bombieri.nicola"Busato, Federico"https://www.zbmath.org/authors/?q=ai:busato.federico"Češka, Milan"https://www.zbmath.org/authors/?q=ai:ceska.milanSummary: The problem of decomposing a directed graph into strongly connected components (SCCs) is a fundamental graph problem that is inherently present in many scientific and commercial applications. Clearly, there is a strong need for good high-performance, e.g., GPU-accelerated, algorithms to solve it. Unfortunately, among existing GPU-enabled algorithms to solve the problem, there is none that can be considered the best on every graph, regardless of the graph characteristics. Indeed, the choice of the most appropriate algorithm to be used is often left to inexperienced users. In this paper, we introduce a novel parametric multi-step scheme to evaluate existing GPU-accelerated algorithms for SCC decomposition in order to alleviate the burden of the choice and to help the user identify which combination of existing techniques for SCC decomposition would fit an expected use case the most. We support our scheme with an extensive experimental evaluation that dissects correlations between the internal structure of GPU-based algorithms and their performance on various classes of graphs. The measurements confirm that there is no algorithm that would beat all other algorithms in the decomposition on all of the classes of graphs. Our contribution thus represents an important step towards an automatically adjusting scheme for GPU-accelerated SCC decomposition.
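As a point of reference for what every algorithm in such a scheme must compute, here is a standard sequential SCC decomposition (Kosaraju's two-pass algorithm, written iteratively); it is a CPU baseline for illustration, not one of the GPU-accelerated algorithms evaluated in the paper:

```python
def scc_kosaraju(n, edges):
    """Kosaraju's algorithm: label each vertex 0..n-1 with its SCC id."""
    fwd = [[] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    # Pass 1: iterative DFS on the forward graph, recording vertices
    # in order of DFS completion.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(fwd[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(fwd[v])))
                    break
            else:
                order.append(u)
                stack.pop()

    # Pass 2: sweep the reversed graph in reverse finishing order;
    # each sweep discovers exactly one strongly connected component.
    comp, c = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            u = stack.pop()
            for v in rev[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    return comp
```

The two global DFS sweeps are precisely what is hard to parallelise, which is why GPU approaches resort to techniques such as forward-backward reachability and trimming instead.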
For the entire collection see [Zbl 1364.68008].On the (parameterized) complexity of recognizing well-covered \((r,\ell)\)-graphshttps://www.zbmath.org/1483.682472022-05-16T20:40:13.078697Z"Alves, Sancrey Rodrigues"https://www.zbmath.org/authors/?q=ai:alves.sancrey-rodrigues"Dabrowski, Konrad K."https://www.zbmath.org/authors/?q=ai:dabrowski.konrad-kazimierz"Faria, Luerbio"https://www.zbmath.org/authors/?q=ai:faria.luerbio"Klein, Sulamita"https://www.zbmath.org/authors/?q=ai:klein.sulamita"Sau, Ignasi"https://www.zbmath.org/authors/?q=ai:sau.ignasi"dos Santos Souza, Uéverton"https://www.zbmath.org/authors/?q=ai:dos-santos-souza.uevertonSummary: An \((r,\ell)\)-partition of a graph \(G\) is a partition of its vertex set into \(r\) independent sets and \(\ell\) cliques. A graph is \((r,\ell)\) if it admits an \((r,\ell)\)-partition. A graph is well-covered if every maximal independent set is also maximum. A graph is \((r,\ell)\)-well-covered if it is both \((r,\ell)\) and well-covered. In this paper we consider two different decision problems. In the \((r,\ell)\)-Well-Covered Graph problem (\((r,\ell)\)WCG for short), we are given a graph \(G\), and the question is whether \(G\) is an \((r,\ell)\)-well-covered graph. In the Well-Covered \((r,\ell)\)-Graph problem (WC\((r,\ell)\)G for short), we are given an \((r,\ell)\)-graph \(G\) together with an \((r,\ell)\)-partition of \(V(G)\) into \(r\) independent sets and \(\ell\) cliques, and the question is whether \(G\) is well-covered. We classify most of these problems into P, coNP-complete, NP-complete, NP-hard, or coNP-hard. Only the cases WC\((r,0)\)G for \(r\geq 3\) remain open. In addition, we consider the parameterized complexity of these problems for several choices of parameters, such as the size \(\alpha\) of a maximum independent set of the input graph, its neighborhood diversity, or the number \(\ell\) of cliques in an \((r,\ell)\)-partition. 
In particular, we show that the parameterized problem of deciding whether a general graph is well-covered parameterized by \(\alpha\) can be reduced to the WC\((0,\ell )\)G problem parameterized by \(\ell\), and we prove that this latter problem is in XP but does not admit polynomial kernels unless \(\mathrm{coNP} \subseteq \mathrm{NP/poly}\).
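The well-coveredness condition itself is simple to state as code. A brute-force checker (illustrative names, exponential running time, suitable only for tiny graphs; the paper's point is the fine-grained complexity of the problem, not this naive approach) might look like:

```python
from itertools import combinations

def is_independent(S, adj):
    """True if no two vertices of S are adjacent."""
    return all(v not in adj[u] for u, v in combinations(S, 2))

def maximal_independent_sets(vertices, adj):
    """Enumerate all maximal independent sets by exhaustive search."""
    vertices = list(vertices)
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            if not is_independent(S, adj):
                continue
            # S is maximal iff no outside vertex can be added.
            if all(any(v in adj[u] for u in S)
                   for v in vertices if v not in S):
                yield set(S)

def is_well_covered(vertices, adj):
    """A graph is well-covered iff every maximal independent set is
    maximum, i.e. all maximal independent sets have the same size."""
    sizes = {len(S) for S in maximal_independent_sets(vertices, adj)}
    return len(sizes) == 1
```

For instance, the 4-cycle is well-covered (both maximal independent sets have size 2), while the path on three vertices is not (its centre alone is maximal, yet the two endpoints form a larger independent set).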
For the entire collection see [Zbl 1377.68004].Latency, capacity, and distributed minimum spanning treeshttps://www.zbmath.org/1483.682482022-05-16T20:40:13.078697Z"Augustine, John"https://www.zbmath.org/authors/?q=ai:augustine.john-e"Gilbert, Seth"https://www.zbmath.org/authors/?q=ai:gilbert.seth"Kuhn, Fabian"https://www.zbmath.org/authors/?q=ai:kuhn.fabian"Robinson, Peter"https://www.zbmath.org/authors/?q=ai:robinson.peter.1"Sourav, Suman"https://www.zbmath.org/authors/?q=ai:sourav.sumanSummary: We study the cost of distributed MST construction in the setting where each edge has a latency and a capacity, along with the weight. Edge latencies capture the delay on the links of the communication network, while capacity captures their throughput (in this case, the rate at which messages can be sent). Depending on how the edge latencies relate to the edge weights, we provide several tight bounds on the time and messages required to construct an MST. When edge weights exactly correspond with the latencies, we show that, perhaps interestingly, the bottleneck parameter in determining the running time of an algorithm is the total weight \(W\) of the MST (rather than the total number of nodes \(n\), as in the standard \textsf{CONGEST} model). That is, we show a tight bound of \(\widetilde{\Theta}(D+\sqrt{W/c})\) rounds, where \(D\) refers to the latency diameter of the graph, \(W\) refers to the total weight of the constructed MST and edges have capacity \(c\). The proposed algorithm sends \(\widetilde{O}(m + W)\) messages, where \(m\), the total number of edges in the network graph under consideration, is a known lower bound on message complexity for MST construction. We also show that \(\Omega (W)\) is a lower bound for fast MST constructions. 
When the edge latencies and the corresponding edge weights are unrelated, and either can take arbitrary values, we show that (unlike the sub-linear time algorithms in the standard \textsf{CONGEST} model, on small diameter graphs), the best time complexity that can be achieved is \(\widetilde{\Theta}(D + n / c)\). However, if we restrict all edges to have equal latency \(\ell\) and capacity \(c\) while having possibly different weights (weights could deviate arbitrarily from \(\ell)\), we give an algorithm that constructs an MST in \(\widetilde{O}(D + \sqrt{n \ell / c})\) time. In each case, we provide nearly matching upper and lower bounds.On the density of non-simple 3-planar graphshttps://www.zbmath.org/1483.682492022-05-16T20:40:13.078697Z"Bekos, Michael A."https://www.zbmath.org/authors/?q=ai:bekos.michael-a"Kaufmann, Michael"https://www.zbmath.org/authors/?q=ai:kaufmann.michael"Raftopoulou, Chrysanthi N."https://www.zbmath.org/authors/?q=ai:raftopoulou.chrysanthi-nSummary: A \(k\)-planar graph is a graph that can be drawn in the plane such that every edge is crossed at most \(k\) times. For \(k \leq 4\),
\textit{J. Pach} and \textit{G. Tóth} [Combinatorica 17, No. 3, 427--439 (1997; Zbl 0902.05017)]
proved a bound of \((k+3)(n-2)\) on the total number of edges of a \(k\)-planar graph, which is tight for \(k=1,2\). For \(k=3\), the bound of \(6n-12\) has been improved to \(\frac{11}{2}n-11\)
in [\textit{J. Pach} et al., Discrete Comput. Geom. 36, No. 4, 527--552 (2006; Zbl 1104.05022)]
and has been shown to be optimal up to an additive constant for simple graphs. In this paper, we prove that the bound of \(\frac{11}{2}n-11\) edges also holds for non-simple 3-planar graphs that admit drawings in which non-homotopic parallel edges and self-loops are allowed. Based on this result, a characterization of optimal 3-planar graphs (that is, 3-planar graphs with \(n\) vertices and exactly \(\frac{11}{2}n-11\) edges) might be possible, as to the best of our knowledge the densest known simple 3-planar graph is not known to be optimal.
For the entire collection see [Zbl 1352.68012].Medians in median graphs and their cube complexes in linear timehttps://www.zbmath.org/1483.682502022-05-16T20:40:13.078697Z"Bénéteau, Laurine"https://www.zbmath.org/authors/?q=ai:beneteau.laurine"Chalopin, Jérémie"https://www.zbmath.org/authors/?q=ai:chalopin.jeremie"Chepoi, Victor"https://www.zbmath.org/authors/?q=ai:chepoi.victor-d"Vaxès, Yann"https://www.zbmath.org/authors/?q=ai:vaxes.yannSummary: The median of a set of vertices \(P\) of a graph \(G\) is the set of all vertices \(x\) of \(G\) minimizing the sum of distances from \(x\) to all vertices of \(P\). In this paper, we present a linear time algorithm to compute medians in median graphs. We also present a linear time algorithm to compute medians in the associated \(\ell_1\)-cube complexes. Our algorithm is based on the majority rule characterization of medians in median graphs and on a fast computation of parallelism classes of edges \((\Theta\)-classes) via Lexicographic Breadth First Search (LexBFS). We show that any LexBFS ordering of the vertices of a median graph satisfies the following \textit{fellow traveler property}: the parents of any two adjacent vertices are also adjacent. Using the fast computation of the \(\Theta\)-classes, we also compute the Wiener index (total distance) in linear time and the distance matrix in optimal quadratic time.Computational aspects of treewidth for graphhttps://www.zbmath.org/1483.682512022-05-16T20:40:13.078697Z"Bykova, V. V."https://www.zbmath.org/authors/?q=ai:bykova.valentina-vladimirovnaSummary: The paper gives a brief overview of recent results on the graph treewidth problem. We investigate some of the lower and upper bounds for treewidth, and present algorithmic methods to improve these bounds.Computational aspects of treewidth for graphhttps://www.zbmath.org/1483.682522022-05-16T20:40:13.078697Z"Bykova, V. 
V."https://www.zbmath.org/authors/?q=ai:bykova.valentina-vladimirovnaSummary: A brief overview of recent results on the problem of treewidth for the graph is given; some of the lower and upper bounds for treewidth are investigated; algorithmic methods to improve these bounds are presented.1-bend RAC drawings of 1-planar graphshttps://www.zbmath.org/1483.682532022-05-16T20:40:13.078697Z"Didimo, Walter"https://www.zbmath.org/authors/?q=ai:didimo.walter"Liotta, Giuseppe"https://www.zbmath.org/authors/?q=ai:liotta.giuseppe"Mehrabi, Saeed"https://www.zbmath.org/authors/?q=ai:mehrabi.saeed"Montecchiani, Fabrizio"https://www.zbmath.org/authors/?q=ai:montecchiani.fabrizioSummary: A graph is 1-planar if it has a drawing where each edge is crossed at most once. A drawing is RAC (right angle crossing) if the edges cross only at right angles. The relationships between 1-planar graphs and RAC drawings have been partially studied in the literature. It is known that there are both 1-planar graphs that are not straight-line RAC drawable and graphs that have a straight-line RAC drawing but that are not 1-planar
[\textit{P. Eades} and the second author, Discrete Appl. Math. 161, No. 7--8, 961--969 (2013; Zbl 1408.05042)].
Also, straight-line RAC drawings always exist for IC-planar graphs
[\textit{F. J. Brandenburg} et al., Theor. Comput. Sci. 636, 1--16 (2016; Zbl 1342.68251)],
a subclass of 1-planar graphs. One of the main questions still open is whether every 1-planar graph has a RAC drawing with at most one bend per edge. We positively answer this question.
For the entire collection see [Zbl 1352.68012].Cliques, packings of segments and binary matriceshttps://www.zbmath.org/1483.682542022-05-16T20:40:13.078697Z"Dimov, Dimcho S."https://www.zbmath.org/authors/?q=ai:dimov.dimcho-s"Kralchev, Dobromir P."https://www.zbmath.org/authors/?q=ai:kralchev.dobromir-p"Penev, Alexander P."https://www.zbmath.org/authors/?q=ai:penev.alexander-p(no abstract)Finding disjoint paths on edge-colored graphs: a multivariate complexity analysishttps://www.zbmath.org/1483.682552022-05-16T20:40:13.078697Z"Dondi, Riccardo"https://www.zbmath.org/authors/?q=ai:dondi.riccardo"Sikora, Florian"https://www.zbmath.org/authors/?q=ai:sikora.florianSummary: The problem of finding the maximum number of vertex-disjoint uni-color paths in an edge-colored graph (Max CDP) has been recently introduced in the literature, motivated by applications in social network analysis. In this paper we investigate how the complexity of the problem depends on graph parameters (distance from disjoint paths and size of vertex cover), and show that it is not FPT-approximable. Moreover, we introduce a new variant of the problem, called Max CDDP, whose goal is to find the maximum number of vertex-disjoint and color-disjoint uni-color paths. We extend some of the results of Max CDP to this new variant, and we prove that unlike Max CDP, Max CDDP is already hard on graphs at distance two from disjoint paths.
For the entire collection see [Zbl 1377.68004].Approximability of the distance independent set problem on regular graphs and planar graphshttps://www.zbmath.org/1483.682562022-05-16T20:40:13.078697Z"Eto, Hiroshi"https://www.zbmath.org/authors/?q=ai:eto.hiroshi"Ito, Takehiro"https://www.zbmath.org/authors/?q=ai:ito.takehiro"Liu, Zhilong"https://www.zbmath.org/authors/?q=ai:liu.zhilong"Miyano, Eiji"https://www.zbmath.org/authors/?q=ai:miyano.eijiSummary: This paper studies generalized variants of the Maximum Independent Set problem, called the Maximum Distance-\(d\) Independent Set problem (\(\mathsf{MaxD}d\mathsf{IS}\) for short). For an integer \(d \geq 2\), a distance-\(d\) independent set of an unweighted graph \(G = (V, E)\) is a subset \(S \subseteq V\) of vertices such that for any pair of vertices \(u, v \in S\), the number of edges in any path between \(u\) and \(v\) is at least \(d\) in \(G\). Given an unweighted graph \(G\), the goal of \(\mathsf{MaxD}d\mathsf{IS}\) is to find a maximum-cardinality distance-\(d\) independent set of \(G\). In this paper, we analyze the (in)approximability of the problem on \(r\)-regular graphs (\(r\geq 3\)) and planar graphs, as follows: (1) For every fixed integers \(d\geq 3\) and \(r\geq 3\), \(\mathsf{MaxD}d\mathsf{IS}\) on \(r\)-regular graphs is APX-hard. (2) We design polynomial-time \(O(r^{d-1})\)-approximation and \(O(r^{d-2}/d)\)-approximation algorithms for \(\mathsf{MaxD}d\mathsf{IS}\) on \(r\)-regular graphs. (3) We sharpen the above \(O(r^{d-2}/d)\)-approximation algorithms when restricted to \(d=r=3\), and give a polynomial-time 2-approximation algorithm for MaxD3IS on cubic graphs. (4) Finally, we show that \(\mathsf{MaxD}d\mathsf{IS}\) admits a polynomial-time approximation scheme (PTAS) for planar graphs.
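The objective of \(\mathsf{MaxD}d\mathsf{IS}\) can be pinned down on tiny instances with an exhaustive reference implementation (illustrative names, exponential running time; the paper's interest is in approximation algorithms, not this brute force). Note that \(d=2\) recovers the ordinary Maximum Independent Set problem:

```python
from collections import deque
from itertools import combinations

def bfs_dist(src, adj):
    """Shortest-path distances from `src` in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def max_distance_d_independent_set(vertices, adj, d):
    """Largest S whose vertices lie pairwise at distance >= d,
    found by trying subset sizes from largest to smallest."""
    vertices = list(vertices)
    dist = {v: bfs_dist(v, adj) for v in vertices}
    for r in range(len(vertices), 0, -1):
        for S in combinations(vertices, r):
            if all(dist[u].get(v, float("inf")) >= d
                   for u, v in combinations(S, 2)):
                return set(S)
    return set()
```

On the path 1-2-3-4-5, for example, the maximum distance-2 independent set is \{1,3,5\}, while for \(d=3\) no three vertices are pairwise far enough apart and the optimum drops to size 2.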
For the entire collection see [Zbl 1377.68004].Time-optimal broadcasting of multiple messages in 1-in port modelhttps://www.zbmath.org/1483.682572022-05-16T20:40:13.078697Z"Gregor, Petr"https://www.zbmath.org/authors/?q=ai:gregor.petr"Škrekovski, Riste"https://www.zbmath.org/authors/?q=ai:skrekovski.riste"Vukašinović, Vida"https://www.zbmath.org/authors/?q=ai:vukasinovic.vidaSummary: In the 1-in port model, every vertex of a synchronous network can receive at most one message per time unit. We consider simultaneous broadcasting of multiple messages from the same source in such networks, with the additional restriction that every received message can be sent out to neighbors only in the next time unit and never to an already informed vertex. We use a general concept of level-disjoint partitions developed for this scenario. Here we introduce a subgraph extension technique for efficiently spreading information within this concept. Surprisingly, this approach with so-called biwheels leads to simultaneous broadcasting of an optimal number of messages on a wide class of graphs in optimal time. In particular, we provide tight results for bipartite tori, meshes, and hypercubes. Several problems and conjectures are proposed.
For the entire collection see [Zbl 1377.68004].Preprocessing vertex-deletion problems: characterizing graph properties by low-rank adjacencieshttps://www.zbmath.org/1483.682582022-05-16T20:40:13.078697Z"Jansen, Bart M. P."https://www.zbmath.org/authors/?q=ai:jansen.bart-m-p"de Kroon, Jari J. H."https://www.zbmath.org/authors/?q=ai:de-kroon.jari-j-hSummary: We consider the \(\Pi\)\textsc{-free Deletion} problem parameterized by the size of a vertex cover, for a range of graph properties \(\Pi\). Given an input graph \(G\), this problem asks whether there is a subset of at most \(k\) vertices whose removal ensures the resulting graph does not contain a graph from \(\Pi\) as an induced subgraph. We introduce the concept of \textit{characterizing a graph property} \(\Pi\) \textit{by low-rank adjacencies}, and use it as the cornerstone of a general kernelization theorem for \(\Pi\)\textsc{-Free Deletion} parameterized by the size of a vertex cover. The resulting framework captures problems such as \textsc{AT-Free Deletion}, \textsc{Wheel-free Deletion}, and \textsc{Interval Deletion}. Moreover, our new framework shows that the vertex-deletion problem to perfect graphs has a polynomial kernel when parameterized by vertex cover, thereby resolving an open question by \textit{F. V. Fomin} et al. [J. Comput. Syst. Sci. 80, No. 2, 468--495 (2014; Zbl 1277.68095)].A distributed algorithm for spectral sparsification of graphs with applications to data clusteringhttps://www.zbmath.org/1483.682592022-05-16T20:40:13.078697Z"Mendoza-Granada, Fabricio"https://www.zbmath.org/authors/?q=ai:mendoza-granada.fabricio"Villagra, Marcos"https://www.zbmath.org/authors/?q=ai:villagra.marcosSummary: Spectral sparsification is a technique that is used to reduce the number of non-zero entries in a positive semidefinite matrix with little change to its spectrum.
In particular, the main application of spectral sparsification is to construct sparse graphs whose spectra are close to that of a given dense graph. We study spectral sparsification under the assumption that the edges of a graph are allocated among sites that can communicate with each other. In this work we show that if a graph is allocated among several sites, the union of the spectral sparsifiers of each induced subgraph gives us a spectral sparsifier of the original graph. In contrast to other works in the literature, we present precise computations of the approximation factor of the union of spectral sparsifiers and give an explicit calculation of the edge weights. Then we present an application of this result to data clustering in the Number-On-Forehead model of multiparty communication complexity when input data is allocated as a sunflower among sites in the party.
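Guarantees of this kind are usually stated via the Laplacian quadratic form \(x^\top L x = \sum_{(u,v)\in E} w_{uv}(x_u - x_v)^2\), which a \((1\pm\varepsilon)\)-sparsifier must preserve for every vector \(x\). A minimal pure-Python sketch of that quadratic form follows; the reweighted graph \(H\) below is purely illustrative and is not an actual sparsifier of \(G\):

```python
def laplacian_quadratic_form(edges, x):
    """x^T L x for a weighted graph given as {(u, v): weight}: equals the
    sum of w_uv * (x[u] - x[v])**2 over all edges."""
    return sum(w * (x[u] - x[v]) ** 2 for (u, v), w in edges.items())

# A triangle G and a hypothetical reweighted 2-edge subgraph H.
G = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
H = {(0, 1): 1.5, (1, 2): 1.5}
x = {0: 1.0, 1: 0.0, 2: -1.0}
qG = laplacian_quadratic_form(G, x)  # 1 + 1 + 4 = 6
qH = laplacian_quadratic_form(H, x)  # 1.5 + 1.5 = 3
print(qG, qH)
```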
For the entire collection see [Zbl 1465.05002].Improved approximation algorithms for \(k\)-connected \(m\)-dominating set problemshttps://www.zbmath.org/1483.682602022-05-16T20:40:13.078697Z"Nutov, Zeev"https://www.zbmath.org/authors/?q=ai:nutov.zeevSummary: A graph is \(k\)-\textbf{connected} if it has \(k\) pairwise internally node disjoint paths between every pair of its nodes. A subset \(S\) of nodes in a graph \(G\) is a \(k\)-\textbf{connected} set if the subgraph \(G[S]\) induced by \(S\) is \(k\)-connected; \(S\) is an \(m\)-\textbf{dominating set} if every \(v\in V\setminus S\) has at least \(m\) neighbors in \(S\). If \(S\) is both \(k\)-connected and \(m\)-dominating then \(S\) is a \(k\)-\textbf{connected} \(m\)-\textbf{dominating set}, or \((k,m)\)-cds for short. In the \(k\)-\textsc{Connected} \(m\)-\textsc{Dominating Set} (\((k,m)\)-\textsc{CDS}) problem the goal is to find a minimum weight \((k,m)\)-cds in a node-weighted graph. We consider the case \(m\geq k\) and obtain the following approximation ratios. For unit disc graphs we obtain ratio \(O(k\ln k)\), improving the ratio \(O(k^2\ln k)\) of
[\textit{Y. Shi} et al., ``Approximation algorithm for minimum weight fault-tolerant virtual backbone in unit disk graphs'', IEEE/ACM Trans. Netw. 25, No. 2, 925--933 (2017; \url{doi:10.1109/TNET.2016.2607723});
\textit{T. Fukunaga}, Algorithmica 80, No. 11, 3270--3292 (2018; Zbl 1414.05224)].
For general graphs we obtain the first non-trivial approximation ratio \(O(k^2\ln n)\).Inserting an edge into a geometric embeddinghttps://www.zbmath.org/1483.682612022-05-16T20:40:13.078697Z"Radermacher, Marcel"https://www.zbmath.org/authors/?q=ai:radermacher.marcel"Rutter, Ignaz"https://www.zbmath.org/authors/?q=ai:rutter.ignazAuthors' abstract: The algorithm to insert an edge \(e\) in linear time into a planar graph \(G\) with a minimal number of crossings on \(e\) [\textit{C. Gutwenger} et al., Algorithmica 41, No. 4, 289--308 (2005; Zbl 1065.68075)] is a helpful tool for designing heuristics that minimize edge crossings in topological drawings of general graphs. Unfortunately, not all such topological drawings are \textit{stretchable}, i.e., there may not exist an equivalent straight-line drawing. That is, there is no planar straight-line drawing \(\Gamma\) of \(G\) such that in \(\Gamma + e\) the edge \(e\) crosses the same edges as in the topological drawing of \(G + e\) and it does so in the same order. This motivates the study of the computational complexity of the problem \textsc{Geometric Edge Insertion}: Given a combinatorially embedded graph \(G\), compute a geometric embedding \(\Gamma\) of \(G\) that minimizes the crossings in \(\Gamma + e\).
We give a characterization of the stretchable topological drawings of \(G + e\) that also applies to the case where the outer face is fixed; this answers an open question of \textit{P. Eades} et al. [Lect. Notes Comput. Sci. 9214, 301--313 (2015; Zbl 1444.68141)]. Algorithmically, we focus on the case where the outer face is not fixed. We show that \textsc{Geometric Edge Insertion} can be solved efficiently for graphs of maximum degree 5. For the general case, we show a \((\Delta - 2)\)-approximation, where \(\Delta\) is the maximum vertex degree of \(G\) and an FPT algorithm with respect to the minimum number of crossings. Finally, we consider the problem of testing whether there exists a solution of \textsc{Geometric Edge Insertion} that achieves the lower bound obtained by a topological insertion.
Reviewer: Xueliang Li (Tianjin)Twins in subdivision drawings of hypergraphshttps://www.zbmath.org/1483.682622022-05-16T20:40:13.078697Z"van Bevern, René"https://www.zbmath.org/authors/?q=ai:van-bevern.rene"Kanj, Iyad"https://www.zbmath.org/authors/?q=ai:kanj.iyad-a"Komusiewicz, Christian"https://www.zbmath.org/authors/?q=ai:komusiewicz.christian"Niedermeier, Rolf"https://www.zbmath.org/authors/?q=ai:niedermeier.rolf"Sorge, Manuel"https://www.zbmath.org/authors/?q=ai:sorge.manuelSummary: Visualizing hypergraphs, systems of subsets of some universe, has continuously attracted research interest in the last decades. We study a natural kind of hypergraph visualization called subdivision drawings.
\textit{K. Dinkla} et al. [``Kelp diagrams: point set membership visualization'', Comput. Graph. Forum 31, No. 3, 875--884 (2012; \url{doi:10.1111/j.1467-8659.2012.03080.x})] claimed that only a few hypergraphs have a subdivision drawing. However, this statement seems to be based on the assumption (also used in previous work) that the input hypergraph does not contain twins, pairs of vertices which are in precisely the same hyperedges (subsets of the universe). We show that such vertices may be necessary for a hypergraph to admit a subdivision drawing. As a counterpart, we show that the number of such ``necessary twins'' is upper-bounded by a function of the number \(m\) of hyperedges and a further parameter \(r\) of the desired drawing related to its number of layers. This leads to a linear-time algorithm for determining such subdivision drawings if \(m\) and \(r\) are constant; in other words, the problem is linear-time fixed-parameter tractable with respect to the parameters \(m\) and \(r\).
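Twins as defined above can be found by grouping vertices on their hyperedge-membership signatures; a short Python sketch of the definition (the list-of-sets input format is an assumption for the example):

```python
from collections import defaultdict

def twin_classes(universe, hyperedges):
    """Group vertices by the set of hyperedges containing them; classes of
    size >= 2 are twins (vertices in precisely the same hyperedges)."""
    signature = defaultdict(list)
    for v in universe:
        sig = frozenset(i for i, e in enumerate(hyperedges) if v in e)
        signature[sig].append(v)
    return [vs for vs in signature.values() if len(vs) >= 2]

edges = [{'a', 'b', 'c'}, {'a', 'b'}, {'c', 'd'}]
print(twin_classes('abcd', edges))  # 'a' and 'b' lie in exactly hyperedges 0 and 1
```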
For the entire collection see [Zbl 1352.68012].Hyperbolic node embedding for temporal networkshttps://www.zbmath.org/1483.682632022-05-16T20:40:13.078697Z"Wang, Lili"https://www.zbmath.org/authors/?q=ai:wang.lili.4|wang.lili.3|wang.lili.6|wang.lili|wang.lili.7|wang.lili.2|wang.lili.8|wang.lili.5|wang.lili.1"Huang, Chenghan"https://www.zbmath.org/authors/?q=ai:huang.chenghan"Ma, Weicheng"https://www.zbmath.org/authors/?q=ai:ma.weicheng"Liu, Ruibo"https://www.zbmath.org/authors/?q=ai:liu.ruibo"Vosoughi, Soroush"https://www.zbmath.org/authors/?q=ai:vosoughi.soroushSummary: Generating general-purpose vector representations of networks allows us to analyze them without the need for extensive feature-engineering. Recent works have shown that the hyperbolic space can naturally represent the structure of networks, and that embedding networks into hyperbolic space is extremely efficient, especially in low dimensions. However, the existing hyperbolic embedding methods apply to static networks and cannot capture the dynamic evolution of the nodes and edges of a temporal network. In this paper, we present an unsupervised framework that uses temporal random walks to obtain training samples with both temporal and structural information to learn hyperbolic embeddings from continuous-time dynamic networks. We also show how the framework extends to attributed and heterogeneous information networks. Through experiments on five publicly available real-world temporal datasets, we show the efficacy of our model in embedding temporal networks in low-dimensional hyperbolic space compared to several other unsupervised baselines. We show that our model obtains state-of-the-art performance in low dimensions, outperforming all baselines, and has competitive performance in higher dimensions, outperforming the baselines in three of the five datasets. 
Our results show that embedding temporal networks in hyperbolic space is extremely effective when low dimensions are required.Improved precise fault diagnosis algorithm for hypercube-like graphshttps://www.zbmath.org/1483.682642022-05-16T20:40:13.078697Z"Ye, Tai-Ling"https://www.zbmath.org/authors/?q=ai:ye.tai-ling"Cheng, Dun-Wei"https://www.zbmath.org/authors/?q=ai:cheng.dun-wei"Hsieh, Sun-Yuan"https://www.zbmath.org/authors/?q=ai:hsieh.sun-yuanSummary: System reliability is an important issue for multiprocessor systems. Fault diagnosis has become crucial for achieving high reliability in multiprocessor systems. The comparison-based model allows a processor to perform diagnosis by comparing the responses of a pair of neighboring processors to the same test assignment. Recently, Ye and Hsieh devised a precise fault diagnosis algorithm that detects all faulty processors in hypercube-like networks under the MM* model with \(O(N(\log _{2}N)^2)\) time complexity, where \(N\) is the cardinality of the processor set in multiprocessor systems. On the basis of Hamiltonian cycle properties, we improve the aforementioned result by presenting an \(O(N)\)-time precise fault diagnosis algorithm that detects all faulty processors in hypercube-like networks under the MM* model.
For the entire collection see [Zbl 1377.68004].The upper and lower bounds of \(R_g\)-conditional diagnosability of networkshttps://www.zbmath.org/1483.682652022-05-16T20:40:13.078697Z"Yuan, Jun"https://www.zbmath.org/authors/?q=ai:yuan.jun"Qiao, Huijuan"https://www.zbmath.org/authors/?q=ai:qiao.huijuan"Liu, Aixia"https://www.zbmath.org/authors/?q=ai:liu.aixiaSummary: The \(R_g\)-conditional diagnosability is a new generalization of conditional diagnosability, which requires that every vertex has at least \(g\) fault-free neighbors. Recently, Wang et al. investigated the \(R_g\)-conditional diagnosability of the \(n\)-dimensional hypercube under the PMC model, and presented a lower bound on its \(R_g\)-conditional diagnosability. In this paper, we investigate the \(R_g\)-conditional diagnosability of general networks under the PMC model, and present lower and upper bounds on the \(R_g\)-conditional diagnosability of networks under some reasonable conditions. Applying our results, we obtain an improved lower bound on the \(R_g\)-conditional diagnosability of hypercubes, as well as lower and upper bounds on the \(R_g\)-conditional diagnosability of exchanged hypercubes.Interpreting the basis path set in neural networkshttps://www.zbmath.org/1483.682662022-05-16T20:40:13.078697Z"Zhu, Juanping"https://www.zbmath.org/authors/?q=ai:zhu.juanping"Meng, Qi"https://www.zbmath.org/authors/?q=ai:meng.qi"Chen, Wei"https://www.zbmath.org/authors/?q=ai:chen.wei.4|chen.wei.2|chen.wei.3|chen.wei.1|chen.wei"Ma, Zhiming"https://www.zbmath.org/authors/?q=ai:ma.zhi-mingSummary: The \(\mathcal{G}\)-SGD algorithm significantly outperforms the conventional SGD algorithm in ReLU neural networks by adopting the basis path set. However, how the inner mechanism of basis paths works remains mysterious, and the \(\mathcal{G}\)-SGD algorithm that helps to find a basis path set is heuristic.
This paper employs graph theory to investigate structural properties of basis paths in a more general and complicated neural network with unbalanced layers and edge-skipping. The hierarchical algorithm HBPS is proposed to find a basis path set by decomposing the complicated network into several independent and parallel substructures. The paper theoretically extends the study of basis paths and provides a methodology for finding the basis path set in a more general neural network.Efficient identification of \(k\)-closed stringshttps://www.zbmath.org/1483.682672022-05-16T20:40:13.078697Z"Alamro, Hayam"https://www.zbmath.org/authors/?q=ai:alamro.hayam"Alzamel, Mai"https://www.zbmath.org/authors/?q=ai:alzamel.mai"Iliopoulos, Costas S."https://www.zbmath.org/authors/?q=ai:iliopoulos.costas-s"Pissis, Solon P."https://www.zbmath.org/authors/?q=ai:pissis.solon-p"Sung, Wing-Kin"https://www.zbmath.org/authors/?q=ai:sung.wing-kin"Watts, Steven"https://www.zbmath.org/authors/?q=ai:watts.stevenA string is called closed if it has a border (prefix/suffix) without internal occurrences, e.g. \(abcbab\). In this paper, the authors extend this notion to \(k\)-closed strings, where \(k\) expresses a Hamming distance error bound. A string is \(k\)-closed if it contains a prefix and a suffix of the same length at distance bounded by \(k\) such that no other internal substring of the same length is within the same distance (or less) from this prefix and this suffix. Such a prefix/suffix pair is then called a \(k\)-border of the string.
The authors then consider the algorithmic problem of finding, given a string \(x\) over an integer alphabet and a positive integer \(k\), a \(k\)-closed border of \(x\). They show an algorithm for this problem working in \(O(kn)\) time and \(O(n)\) space, where \(n\) is the length of the string \(x\).
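The definition admits a straightforward brute-force check. The cubic-time Python sketch below reflects one reading of the definition for illustration; it is not the authors' \(O(kn)\)-time algorithm:

```python
def hamming(u, v):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(u, v))

def k_closed_border(x, k):
    """Naive search for a k-closed border: an equal-length prefix/suffix pair
    at Hamming distance d <= k such that no internal factor of that length is
    within distance d of both.  Longest candidate first."""
    n = len(x)
    for length in range(n - 1, 0, -1):
        pre, suf = x[:length], x[n - length:]
        d = hamming(pre, suf)
        if d > k:
            continue
        internal = (x[i:i + length] for i in range(1, n - length))
        if not any(hamming(f, pre) <= d and hamming(f, suf) <= d
                   for f in internal):
            return pre, suf
    return None

print(k_closed_border("abcbab", 0))  # the border "ab" has no internal occurrence
```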
The paper is complemented with experimental results on an implementation of the proposed algorithm.
Reviewer: Gabriele Fici (Palermo)Unavoidable sets of partial words of uniform lengthhttps://www.zbmath.org/1483.682682022-05-16T20:40:13.078697Z"Becker, Joey"https://www.zbmath.org/authors/?q=ai:becker.joey"Blanchet-Sadri, F."https://www.zbmath.org/authors/?q=ai:blanchet-sadri.francine"Flapan, Laure"https://www.zbmath.org/authors/?q=ai:flapan.laure"Watkins, Stephen"https://www.zbmath.org/authors/?q=ai:watkins.stephenSummary: A set \(X\) of partial words over a finite alphabet \(A\) is called unavoidable if every two-sided infinite word over \(A\) has a factor compatible with an element of \(X\). Unlike the case of a set of words without holes, the problem of deciding whether or not a given finite set of \(n\) partial words over a \(k\)-letter alphabet is avoidable is NP-hard, even when we restrict to a set of partial words of uniform length. So classifying such sets, with parameters \(k\) and \(n\), as avoidable or unavoidable becomes an interesting problem. In this paper, we work towards this classification problem by investigating the maximum number of holes we can fill in unavoidable sets of partial words of uniform length over an alphabet of any fixed size, while maintaining the unavoidability property.
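The compatibility relation on which unavoidability of partial words rests can be sketched in a few lines of Python; writing holes as `?` is an assumption made for this illustration:

```python
HOLE = '?'

def compatible(u, v):
    """Two partial words of equal length are compatible if they agree at
    every position where neither has a hole (written '?' here)."""
    return len(u) == len(v) and all(
        a == b or HOLE in (a, b) for a, b in zip(u, v))

def has_compatible_factor(word, pattern):
    """Does `word` contain a factor compatible with the partial word `pattern`?"""
    m = len(pattern)
    return any(compatible(word[i:i + m], pattern)
               for i in range(len(word) - m + 1))

print(compatible("a?b", "acb"))                # the hole matches anything
print(has_compatible_factor("ababab", "a?a"))  # the factor "aba" is compatible
```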
For the entire collection see [Zbl 1433.68013].Constructing words with high distinct square densitieshttps://www.zbmath.org/1483.682692022-05-16T20:40:13.078697Z"Blanchet-Sadri, F."https://www.zbmath.org/authors/?q=ai:blanchet-sadri.francine"Osborne, S."https://www.zbmath.org/authors/?q=ai:osborne.steven|osborne.steveSummary: Fraenkel and Simpson showed that the number of distinct squares in a word of length \(n\) is bounded from above by \(2n\), since at most two distinct squares have their rightmost, or last, occurrence begin at each position. Improvements by Ilie to \(2n-\Theta(\log n)\) and by Deza et al. to \(\lfloor 11n/6\rfloor\) rely on the study of the combinatorics of FS-double-squares, positions at which two last occurrences of squares (the maximum possible) begin. In this paper, we first study how to maximize runs of FS-double-squares in the prefix of a word. We show that for a given positive integer \(m\), the minimum length of a word beginning with \(m\) FS-double-squares, whose lengths are equal, is \(7m+3\). We construct such a word and analyze its distinct-square-sequence as well as its distinct-square-density. We then generalize our construction. We also construct words with high distinct-square-densities that approach 5/6.
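The counted objects, distinct square factors, can be enumerated by brute force on small words; a Python sketch of the counting (not of the paper's constructions):

```python
def distinct_squares(w):
    """All distinct square factors uu of w, found by brute force."""
    found = set()
    n = len(w)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            u = w[i:i + half]
            if u == w[i + half:i + 2 * half]:
                found.add(u + u)
    return found

w = "aabaab"
sq = distinct_squares(w)
print(sorted(sq))            # ['aa', 'aabaab']
print(len(sq) <= 2 * len(w)) # consistent with the Fraenkel-Simpson 2n bound
```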
For the entire collection see [Zbl 1433.68013].Fractional meanings of nonrepetitivenesshttps://www.zbmath.org/1483.682702022-05-16T20:40:13.078697Z"Chybowska-Sokół, Joanna"https://www.zbmath.org/authors/?q=ai:chybowska-sokol.joanna"Dębski, Michał"https://www.zbmath.org/authors/?q=ai:debski.michal"Grytczuk, Jarosław"https://www.zbmath.org/authors/?q=ai:grytczuk.jaroslaw"Junosza-Szaniawski, Konstanty"https://www.zbmath.org/authors/?q=ai:junosza-szaniawski.konstanty"Nayar, Barbara"https://www.zbmath.org/authors/?q=ai:nayar.barbara"Pastwa, Urszula"https://www.zbmath.org/authors/?q=ai:pastwa.urszula"Węsek, Krzysztof"https://www.zbmath.org/authors/?q=ai:wesek.krzysztofSummary: A sequence \(S\) is called \(r\)\textit{-nonrepetitive} if no \(r\) sequentially adjacent blocks in \(S\) are identical. By the classic results of Thue from the beginning of the 20th century, we know that there exist arbitrarily long binary 3-nonrepetitive sequences and ternary 2-nonrepetitive sequences. This discovery stimulated over the years intensive research leading to various generalizations and many exciting problems and results in combinatorics on words.
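The defining property of an \(r\)-nonrepetitive sequence can be tested naively; a Python sketch of the definition only:

```python
def is_r_nonrepetitive(seq, r):
    """True if seq contains no r consecutive identical blocks, i.e. no
    factor of the form B repeated r times."""
    n = len(seq)
    for i in range(n):
        for block in range(1, (n - i) // r + 1):
            chunks = [seq[i + j * block:i + (j + 1) * block] for j in range(r)]
            if all(c == chunks[0] for c in chunks[1:]):
                return False
    return True

print(is_r_nonrepetitive("abcacb", 2))  # square-free, hence 2-nonrepetitive
print(is_r_nonrepetitive("abab", 2))    # contains the square (ab)(ab)
```

Thue's results assert that such sequences exist over two letters for \(r = 3\) and over three letters for \(r = 2\).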
In this paper, we study two \textit{fractional} versions of nonrepetitive sequences. In the first one, we demand that all subsequences of a sequence \(S\), with gaps bounded by a fixed integer \(j \geq 1\), are \(r\)-nonrepetitive. (This variant emerged from studying nonrepetitive colorings of the Euclidean plane.) Let \(\pi_j (r)\) denote the least size of an alphabet guaranteeing existence of arbitrarily long such sequences. We prove that \(\left\lceil \frac{j}{r-1} \right\rceil + 1 \leq \pi_j (r) \leq 2 \left\lceil \frac{j}{r-1} \right\rceil + 1\), for all \(r \geq 3\) and \(j \geq 1\). We also consider a more general situation with the gap bound \(j\) being a real number, and apply this to nonrepetitive coloring of the plane. The second variant allows for using a ``fractional'' alphabet, analogously to the \textit{fractional coloring} of graphs. More specifically, we look for sequences of \(b\)-element subsets \(B_1, B_2, \ldots\) of an \(a\)-element alphabet, with the ratio \(a/b\) as small as possible, such that every member of the Cartesian product \(B_1 \times B_2 \times \cdots\) is \(r\)-nonrepetitive. By using the \textit{entropy compression} argument, we prove that the corresponding parameter \(\pi^f (r) = \inf \frac{a}{b}\) can be arbitrarily close to 1 for sufficiently large \(r\).Theta palindromes in theta conjugateshttps://www.zbmath.org/1483.682712022-05-16T20:40:13.078697Z"Mahalingam, Kalpana"https://www.zbmath.org/authors/?q=ai:mahalingam.kalpana"Pandoh, Palak"https://www.zbmath.org/authors/?q=ai:pandoh.palak"Maity, Anuran"https://www.zbmath.org/authors/?q=ai:maity.anuranSummary: A DNA string is a Watson-Crick (WK) palindrome when the complement of its reverse is equal to itself. The Watson-Crick mapping \(\theta\) is an involution that is also an antimorphism. The \(\theta\)-conjugates of a word generalize the conjugates of a word by incorporating the notion of the WK-involution \(\theta\).
In this paper, we study the distribution of palindromes and Watson-Crick palindromes, also known as \(\theta\)-palindromes among both the set of conjugates and \(\theta\)-conjugates of a word \(w\). We also consider some general properties of the set \(C_{\theta}(w)\), i.e., the set of \(\theta\)-conjugates of a word \(w\), and characterize words \(w\) such that \(|C_{\theta}(w)|=|w|+1\), i.e., with the maximum number of elements in \(C_{\theta}(w)\). We also find the structure of words that have at least one (WK)-palindrome in \(C_{\theta}(w)\).
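A small Python sketch of these notions, assuming the standard DNA complement map and reading the definition as \(C_{\theta}(w)=\{y\,\theta(x) : w=xy\}\), one element per split of \(w\), hence at most \(|w|+1\) elements:

```python
COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def theta(w):
    """Watson-Crick involution: the reverse complement of a DNA string."""
    return ''.join(COMPLEMENT[c] for c in reversed(w))

def is_theta_palindrome(w):
    """w is a WK (theta-) palindrome when theta(w) == w."""
    return theta(w) == w

def theta_conjugates(w):
    """C_theta(w) = { y + theta(x) : w = x + y }, one word per split."""
    return {w[i:] + theta(w[:i]) for i in range(len(w) + 1)}

print(is_theta_palindrome("GAATTC"))   # EcoRI site: equal to its reverse complement
print(sorted(theta_conjugates("AC")))  # |w| + 1 = 3 distinct theta-conjugates
```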
For the entire collection see [Zbl 1464.68016].Entropy and monotonicity in artificial intelligencehttps://www.zbmath.org/1483.682722022-05-16T20:40:13.078697Z"Bouchon-Meunier, Bernadette"https://www.zbmath.org/authors/?q=ai:bouchon-meunier.bernadette"Marsala, Christophe"https://www.zbmath.org/authors/?q=ai:marsala.christopheSummary: Entropies and measures of information are extensively used in several domains and applications in Artificial Intelligence. Among the original quantities from Information theory and Probability theory, a lot of extensions have been introduced to take into account fuzzy sets, intuitionistic fuzzy sets and other representation models of uncertainty and imprecision. In this paper, we propose a study of the common property of monotonicity of such measures with regard to a refinement of information, showing that the main differences between these quantities come from the diversity of orders defining such a refinement. Our aim is to propose a clarification of the concept of refinement of information and the underlying monotonicity, and to illustrate this paradigm by the utilisation of such measures in Artificial Intelligence.An approach to one-bit compressed sensing based on probably approximately correct learning theoryhttps://www.zbmath.org/1483.682732022-05-16T20:40:13.078697Z"Ahsen, Mehmet Eren"https://www.zbmath.org/authors/?q=ai:ahsen.mehmet-eren"Vidyasagar, Mathukumalli"https://www.zbmath.org/authors/?q=ai:vidyasagar.mathukumalliSummary: In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in \(\mathbb{R}^n\) generated by \(k\)-sparse vectors is bounded below by \(k ( \lfloor\lg (n/k) \rfloor +1 )\) and above by \(\lfloor 2k \lg (en) \rfloor \). 
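The two VC-dimension bounds just stated are easy to evaluate numerically. A small Python sketch, reading \(\lg\) as \(\log_2\) and \(e\) as Euler's number (an assumption consistent with the stated asymptotics):

```python
import math

def vc_lower(n, k):
    """Lower bound k * (floor(lg(n/k)) + 1) on the VC-dimension of
    half-spaces in R^n generated by k-sparse vectors (lg = log base 2)."""
    return k * (math.floor(math.log2(n / k)) + 1)

def vc_upper(n, k):
    """Upper bound floor(2k * lg(e * n))."""
    return math.floor(2 * k * math.log2(math.e * n))

for n, k in [(1024, 8), (10**6, 20)]:
    print(n, k, vc_lower(n, k), vc_upper(n, k))  # both are O(k lg n)
```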
By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a \(k\)-sparse vector with \(O(k \lg n)\) measurements, given only the signs of the measurement vector. This result holds for \textit{all} probability measures on \(\mathbb{R}^n\). The theory is also applicable to the case of noisy labels, where the signs of the measurements are flipped with some unknown probability.K-plex cover pooling for graph neural networkshttps://www.zbmath.org/1483.682742022-05-16T20:40:13.078697Z"Bacciu, Davide"https://www.zbmath.org/authors/?q=ai:bacciu.davide"Conte, Alessio"https://www.zbmath.org/authors/?q=ai:conte.alessio"Grossi, Roberto"https://www.zbmath.org/authors/?q=ai:grossi.roberto"Landolfi, Francesco"https://www.zbmath.org/authors/?q=ai:landolfi.francesco"Marino, Andrea"https://www.zbmath.org/authors/?q=ai:marino.andreaSummary: Graph pooling methods provide mechanisms for structure reduction that are intended to ease the diffusion of context between nodes further in the graph, and that typically leverage community discovery mechanisms or node and edge pruning heuristics. In this paper, we introduce a novel pooling technique which borrows from classical results in graph theory that is non-parametric and generalizes well to graphs of different nature and connectivity patterns. Our pooling method, named KPlexPool, builds on the concepts of graph covers and \(k\)-plexes, i.e. pseudo-cliques where each node can miss up to \(k\) links. 
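Membership in a \(k\)-plex is straightforward to verify. A Python sketch under the standard convention that every vertex of \(S\) must have at least \(|S|-k\) neighbors inside \(S\) (so a clique is a 1-plex); this illustrates the definition, not the KPlexPool cover computation:

```python
def is_k_plex(adj, S, k):
    """S is a k-plex if every v in S has at least |S| - k neighbors inside S.
    `adj` maps each vertex to its set of neighbors."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)

# A 4-cycle: each vertex is adjacent to 2 of the other 3 vertices.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_k_plex(adj, {0, 1, 2, 3}, 2))  # degree 2 >= 4 - 2
print(is_k_plex(adj, {0, 1, 2, 3}, 1))  # not a clique
```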
The experimental evaluation on benchmarks on molecular and social graph classification shows that KPlexPool achieves state-of-the-art performance compared with both parametric and non-parametric pooling methods in the literature, despite generating pooled graphs based solely on topological information.Assessing the data complexity of imbalanced datasetshttps://www.zbmath.org/1483.682752022-05-16T20:40:13.078697Z"Barella, Victor H."https://www.zbmath.org/authors/?q=ai:barella.victor-h"Garcia, Luís P. F."https://www.zbmath.org/authors/?q=ai:garcia.luis-p-f"de Souto, Marcilio C. P."https://www.zbmath.org/authors/?q=ai:de-souto.marcilio-c-p"Lorena, Ana C."https://www.zbmath.org/authors/?q=ai:lorena.ana-carolina"de Carvalho, André C. P. L. F."https://www.zbmath.org/authors/?q=ai:ponce-de-leon-ferreira-de-carvalho.andre-carlosSummary: Imbalanced datasets are an important challenge in supervised Machine Learning (ML). According to the literature, class imbalance does not necessarily impose difficulties for ML algorithms. Difficulties mainly arise from other characteristics, such as overlapping between classes and complex decision boundaries. For binary classification tasks, calculating imbalance is straightforward, e.g., the ratio between class sizes. However, measuring more relevant characteristics, such as class overlapping, is not trivial. In the past years, complexity measures able to assess more relevant dataset characteristics have been proposed. In this paper, we investigate their effectiveness on real imbalanced datasets and how they are affected by applying different data imbalance treatments (DIT). To this end, we perform two data-driven experiments: (1) We adapt the complexity measures to the context of imbalanced datasets. The experimental results show that our proposed measures assess the difficulty of imbalanced problems better than the original ones. We also compare the results with the state of the art on data complexity measures for imbalanced datasets.
(2) We analyze the behavior of complexity measures before and after applying DITs. According to the results, the difference in data complexity, in general, correlates with the predictive performance improvement obtained by applying DITs to the original datasets.Differentially private distance learning in categorical datahttps://www.zbmath.org/1483.682762022-05-16T20:40:13.078697Z"Battaglia, Elena"https://www.zbmath.org/authors/?q=ai:battaglia.elena"Celano, Simone"https://www.zbmath.org/authors/?q=ai:celano.simone"Pensa, Ruggero G."https://www.zbmath.org/authors/?q=ai:pensa.ruggero-gSummary: Most privacy-preserving machine learning methods are designed around continuous or numeric data, but categorical attributes are common in many application scenarios, including clinical and health records, census and survey data. Distance-based methods, in particular, have limited applicability to categorical data, since they do not capture the complexity of the relationships among different values of a categorical attribute. Although distance learning algorithms exist for categorical data, they may disclose private information about individual records if applied to a secret dataset. To address this problem, we introduce a differentially private family of algorithms for learning distances between any pair of values of a categorical attribute according to the way they are co-distributed with the values of other categorical attributes forming the so-called context.
We define different variants of our algorithm and we show empirically that our approach consumes little privacy budget while providing accurate distances, making it suitable for distance-based applications, such as clustering and classification.On version space compressionhttps://www.zbmath.org/1483.682772022-05-16T20:40:13.078697Z"Ben-David, Shai"https://www.zbmath.org/authors/?q=ai:ben-david.shai"Urner, Ruth"https://www.zbmath.org/authors/?q=ai:urner.ruthSummary: We study compressing labeled data samples so as to maintain version space information. While classic compression schemes
[\textit{N. Littlestone} and \textit{M. K. Warmuth}, ``Relating data compression and learnability'', unpublished manuscript (1986)]
only ask for recovery of a sample's labels, many applications, such as distributed learning, require compact representations of more diverse information which is contained in a given data sample. In this work, we propose and analyze various frameworks for compression schemes designed to allow for recovery of version spaces. We consider exact versus approximate recovery as well as compression to subsamples versus compression to subsets of the version space. For all frameworks, we provide some positive examples and sufficient conditions for compressibility while also pointing out limitations by formally establishing impossibility of compression for certain classes.
For the entire collection see [Zbl 1346.68016].A novel method for speed training acceleration of recurrent neural networkshttps://www.zbmath.org/1483.682782022-05-16T20:40:13.078697Z"Bilski, Jarosław"https://www.zbmath.org/authors/?q=ai:bilski.jaroslaw"Rutkowski, Leszek"https://www.zbmath.org/authors/?q=ai:rutkowski.leszek"Smoląg, Jacek"https://www.zbmath.org/authors/?q=ai:smolag.jacek"Tao, Dacheng"https://www.zbmath.org/authors/?q=ai:tao.dachengSummary: Although recurrent neural networks (RNNs) perfectly solve many difficult problems, their computational complexity significantly increases training time. Therefore, the primary problem with applying RNNs is to shorten the time needed to train and operate a network. An effective solution to this problem is to use parallel processing. In the paper, a particular approach for the Jordan network will be shown; however, the presented idea is applicable to other RNN structures. This type of network is characterized by natural parallelism, and in the paper, this feature is used to significantly accelerate the learning process. High-performance learning has been achieved using a novel parallel three-dimensional architecture. The presented solutions can be implemented in digital hardware.Forward-backward selection with early droppinghttps://www.zbmath.org/1483.682792022-05-16T20:40:13.078697Z"Borboudakis, Giorgos"https://www.zbmath.org/authors/?q=ai:borboudakis.giorgos"Tsamardinos, Ioannis"https://www.zbmath.org/authors/?q=ai:tsamardinos.ioannisSummary: Forward-backward selection is one of the most basic and commonly-used feature selection algorithms available. It is also general and conceptually applicable to many different types of data. In this paper, we propose a heuristic that significantly improves its running time, while preserving predictive performance. The idea is to temporarily discard the variables that are conditionally independent of the outcome given the selected variable set.
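The early-dropping idea just described can be sketched as follows; the scoring function `dep` is a hypothetical stand-in for a conditional-dependence test, and the control flow is illustrative rather than the authors' exact algorithm:

```python
def forward_selection_early_dropping(variables, dep, alpha=0.05, runs=2):
    """Forward selection with early dropping (sketch).  `dep(v, selected)` is a
    caller-supplied conditional-dependence score; variables scoring below
    `alpha` are dropped for the rest of the run and reconsidered only in the
    next run."""
    selected = []
    for _ in range(runs):
        remaining = [v for v in variables if v not in selected]
        while remaining:
            scored = [(dep(v, selected), v) for v in remaining]
            # early dropping: discard variables deemed independent given `selected`
            scored = [(s, v) for s, v in scored if s >= alpha]
            if not scored:
                break
            _, best = max(scored)
            selected.append(best)
            remaining = [v for _, v in scored if v != best]
    return selected

# Toy score: 'x1' and 'x2' are informative, the rest are noise.
toy = {'x1': 0.9, 'x2': 0.8, 'n1': 0.01, 'n2': 0.02}
dep = lambda v, sel: toy[v]
print(forward_selection_early_dropping(list(toy), dep))  # ['x1', 'x2']
```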
Depending on how those variables are reconsidered and reintroduced, this heuristic gives rise to a family of algorithms with increasingly stronger theoretical guarantees. In distributions that can be faithfully represented by Bayesian networks or maximal ancestral graphs, members of this algorithmic family are able to correctly identify the Markov blanket in the sample limit. In experiments we show that the proposed heuristic increases computational efficiency by about 1--2 orders of magnitude, while selecting fewer or the same number of variables and retaining predictive performance. Furthermore, we show that the proposed algorithm and feature selection with LASSO perform similarly when restricted to select the same number of variables, making the proposed algorithm an attractive alternative for problems where no (efficient) algorithm for LASSO exists.The maximum cosine framework for deriving perceptron based linear classifiershttps://www.zbmath.org/1483.682802022-05-16T20:40:13.078697Z"Bshouty, Nader H."https://www.zbmath.org/authors/?q=ai:bshouty.nader-h"Haddad-Zaknoon, Catherine A."https://www.zbmath.org/authors/?q=ai:haddad-zaknoon.catherine-aSummary: In this work, we introduce a mathematical framework, called the Maximum Cosine Framework or MCF, for deriving new linear classifiers. The method is based on selecting an appropriate bound on the cosine of the angle between the target function and the algorithm's. To justify its correctness, we use the MCF to show how to regenerate the update rule of aggressive ROMMA
[\textit{Y. Li} and \textit{P. M. Long}, Mach. Learn. 46, No. 1--3, 361--387 (2002; Zbl 0998.68110)].
Moreover, we construct a cosine bound from which we build the maximum cosine perceptron algorithm or, for short, the MCP algorithm. We prove that the MCP shares the same mistake bound as the perceptron
[\textit{F. Rosenblatt}, ``The perceptron: a probabilistic model for information storage and organization in the brain'', Psychol. Rev. 65, No. 6, 386--408 (1958; \url{doi:10.1037/h0042519})].
In addition, we demonstrate the promising performance of the MCP on a real dataset. Our experiments show that, under the restriction of single pass learning, the MCP algorithm outperforms PA
[\textit{K. Crammer} et al., J. Mach. Learn. Res. 7, 551--585 (2006; Zbl 1222.68177)]
and aggressive ROMMA.
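For reference, the classical perceptron update against whose mistake bound the MCP is measured (this is the textbook Rosenblatt algorithm, not the MCP itself):

```python
import numpy as np

def perceptron(X, y, epochs=10):
    """Classical Rosenblatt perceptron.

    Labels y are in {-1, +1}.  On every mistake (or boundary hit),
    the weight vector is updated by w += y_i * x_i.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # misclassified or exactly on the boundary
                w += yi * xi
                mistakes += 1
        if mistakes == 0:            # converged on linearly separable data
            break
    return w
```

On linearly separable data the number of updates is bounded by the classical \((R/\gamma)^2\) margin argument, which is the baseline bound the MCP matches.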
For the entire collection see [Zbl 1346.68016].Unlabeled sample compression schemes and corner peelings for ample and maximum classeshttps://www.zbmath.org/1483.682812022-05-16T20:40:13.078697Z"Chalopin, Jérémie"https://www.zbmath.org/authors/?q=ai:chalopin.jeremie"Chepoi, Victor"https://www.zbmath.org/authors/?q=ai:chepoi.victor-d"Moran, Shay"https://www.zbmath.org/authors/?q=ai:moran.shay"Warmuth, Manfred K."https://www.zbmath.org/authors/?q=ai:warmuth.manfred-kSummary: We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections enable us to export results from geometry to machine learning. Our first main result is based on a geometric construction by
\textit{H. Tracy Hall} [Counterexamples in discrete geometry. University of California (PhD thesis) (2004)]
of a partial shelling of the cross-polytope which cannot be extended. From it, we derive a maximum class of VC dimension 3 without corners. This refutes several previous works in machine learning. In particular, it implies that the previous constructions of optimal unlabeled sample compression schemes for maximum classes are erroneous. On the positive side, we present a new construction of an optimal unlabeled sample compression scheme for maximum classes. We leave open whether our unlabeled sample compression scheme extends to ample classes, which generalize maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the associated 1-inclusion graph.A two-stage method for spectral-spatial classification of hyperspectral imageshttps://www.zbmath.org/1483.682822022-05-16T20:40:13.078697Z"Chan, Raymond H."https://www.zbmath.org/authors/?q=ai:chan.raymond-hon-fu"Kan, Kelvin K."https://www.zbmath.org/authors/?q=ai:kan.kelvin-k"Nikolova, Mila"https://www.zbmath.org/authors/?q=ai:nikolova.mila"Plemmons, Robert J."https://www.zbmath.org/authors/?q=ai:plemmons.robert-jamesSummary: We propose a novel two-stage method for the classification of hyperspectral images. Pixel-wise classifiers, such as the classical support vector machine (SVM), consider spectral information only. As spatial information is not utilized, the classification results are not optimal and the classified image may appear noisy. Many existing methods, such as morphological profiles, superpixel segmentation, and composite kernels, exploit the spatial information. In this paper, we propose a two-stage approach inspired by image denoising and segmentation to incorporate the spatial information. In the first stage, SVMs are used to estimate the class probability for each pixel. In the second stage, a convex variant of the Mumford-Shah model is applied to each probability map to denoise and segment the image into different classes.
Our proposed method effectively utilizes both spectral and spatial information of the data sets and is fast as only convex minimization is needed in addition to the SVMs. Experimental results on three widely utilized real hyperspectral data sets indicate that our method is very competitive in accuracy, timing, and the number of parameters when compared with current state-of-the-art methods, especially when the inter-class spectra are similar or the percentage of training pixels is reasonably high.RSMOTE: a self-adaptive robust SMOTE for imbalanced problems with label noisehttps://www.zbmath.org/1483.682832022-05-16T20:40:13.078697Z"Chen, Baiyun"https://www.zbmath.org/authors/?q=ai:chen.baiyun"Xia, Shuyin"https://www.zbmath.org/authors/?q=ai:xia.shuyin"Chen, Zizhong"https://www.zbmath.org/authors/?q=ai:chen.zizhong"Wang, Binggui"https://www.zbmath.org/authors/?q=ai:wang.binggui"Wang, Guoyin"https://www.zbmath.org/authors/?q=ai:wang.guoyinSummary: Imbalanced classification is an important task in supervised learning, and \textit{Synthetic Minority Over-sampling Technique} (SMOTE) is the most common method to address it. However, the performance of SMOTE deteriorates in the presence of label noise. Current generalizations of SMOTE try to tackle this problem by either selecting some samples in the minority class as seed samples or combining SMOTE with a certain noise filter. Unfortunately, the former approach usually introduces extra parameters that are difficult to optimize, and the latter one relies heavily on the performance of a specific noise filter. In this paper, a self-adaptive robust SMOTE, called RSMOTE, is proposed for imbalanced classification with label noise. In RSMOTE, relative density has been introduced to measure the local density of every minority sample, and the non-noisy minority samples are divided into borderline samples and safe samples adaptively, based on their distinguishing characteristics of relative density.
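The basic SMOTE interpolation step that RSMOTE and its relatives build on can be sketched as follows (a generic sketch only; RSMOTE's relative-density weighting and borderline/safe split are not modeled here):

```python
import numpy as np

def smote_sample(minority, k=3, n_new=10, rng=None):
    """Basic SMOTE step: generate synthetic minority points by interpolating
    a random minority sample toward one of its k nearest minority neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    minority = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # k nearest neighbours, excluding itself
        j = rng.choice(nbrs)
        gap = rng.random()                   # interpolation coefficient in [0, 1)
        out.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(out)
```

Because new points lie on segments between existing minority points, they never leave the coordinate-wise range of the minority class; RSMOTE's contribution is deciding *where* and *how many* such points to generate.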
In addition, we reweight the number of samples to be generated for each minority sample based on its chaotic level. Furthermore, we generate new samples within the borderline area and the safe area, respectively, to enhance the separability of the boundary. RSMOTE does not rely on any specific noise filter, nor does it introduce any extra parameters. The experimental results demonstrate that the proposed approach performs better than the comparison methods in terms of several metrics, including Precision, Recall, Area Under the Curve (AUC), F1-measure, and G-mean. The implementation of the proposed RSMOTE in the programming language Python is available at \url{https://github.com/syxiaa/RSMOTE}.Mirrored conditional random field model for object recognition in indoor environmentshttps://www.zbmath.org/1483.682842022-05-16T20:40:13.078697Z"Chen, Haotian"https://www.zbmath.org/authors/?q=ai:chen.haotian"Sun, Fengchi"https://www.zbmath.org/authors/?q=ai:sun.fengchi"Yuan, Jing"https://www.zbmath.org/authors/?q=ai:yuan.jing"Huang, Yalou"https://www.zbmath.org/authors/?q=ai:huang.yalouSummary: Traditional object recognition algorithms are based on a commonly adopted closed-set hypothesis, assuming that the knowledge given in training is complete. However, real situations are often open and nonstatic, in which case the models only obtain incomplete knowledge during the training phase. This paper proposes a new type of conditional random field (CRF) model to solve a special case of incomplete knowledge, in which the visual appearance of certain objects changes significantly between training and testing, and as a result, certain unary features (features of individual objects) extracted from red green blue-depth (RGB-D) images are no longer reliable. Mirror nodes are introduced into the architecture based on the standard CRF model to build the mirrored conditional random field (Mirror-CRF) model, which integrates two types of object nodes: original nodes and mirror nodes.
The mirror nodes have no unary features, only pairwise features, which describe relationships between two objects and are more reliable than unary features for object recognition in the case of appearance variation. The experimental results show that the Mirror-CRF model reduces the influence of significant changes in the appearance of certain objects and improves the object recognition ability under the condition of incomplete knowledge.A hypergrid based adaptive learning method for detecting data faults in wireless sensor networkshttps://www.zbmath.org/1483.682852022-05-16T20:40:13.078697Z"Chen, Lingqiang"https://www.zbmath.org/authors/?q=ai:chen.lingqiang"Li, Guanghui"https://www.zbmath.org/authors/?q=ai:li.guanghui"Huang, Guangyan"https://www.zbmath.org/authors/?q=ai:huang.guangyanSummary: In wireless sensor networks (WSNs), data anomalies/faults often occur due to the limited resources and unreliability of sensor nodes. Many traditional anomaly detection methods are designed in a batch manner, but given the streaming nature of data in WSNs, continuous anomaly detection methods are preferred. Existing methods often detect only a single type of fault but cannot detect multiple types of faults, which are actually more common in sensor data. Therefore, this paper provides a Hypergrid based Adaptive Detection of Faults (HADF) method, which adopts hypergrid and statistical analysis to recognize three types of faults in the sensor data, including outliers, stuck-at faults, and noisy faults. HADF is a distributed method running on sensor nodes, which can reduce the influence of concept drift in unstable streaming data through combining lazy learning and continuous learning to adaptively update its normal profile.
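The hypergrid idea can be illustrated in one dimension: bucket readings into grid cells and flag readings whose cell neighbourhood holds few historical readings. This is a minimal sketch; the cell size, the threshold, and HADF's stuck-at/noise tests and adaptive profile updates are all simplified away.

```python
from collections import Counter

def hypergrid_outliers(values, cell=1.0, min_count=3):
    """Flag readings whose grid cell (including the two adjacent cells)
    contains fewer than `min_count` readings overall -- a 1D caricature
    of hypergrid-based outlier detection."""
    cells = Counter(int(v // cell) for v in values)
    flags = []
    for v in values:
        c = int(v // cell)
        support = cells[c - 1] + cells[c] + cells[c + 1]
        flags.append(support < min_count)
    return flags
```

A reading of 35.0 among temperatures clustered near 20 falls into an isolated cell and is flagged, while the clustered readings support each other and are not.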
In the experimental study, we have manually inserted different types of faults into two real-world datasets, and the results demonstrate that HADF achieves higher accuracy than four counterpart methods in detecting data faults, with reasonable efficiency.Enhanced balanced min cuthttps://www.zbmath.org/1483.682862022-05-16T20:40:13.078697Z"Chen, Xiaojun"https://www.zbmath.org/authors/?q=ai:chen.xiaojun.1"Hong, Weijun"https://www.zbmath.org/authors/?q=ai:hong.weijun"Nie, Feiping"https://www.zbmath.org/authors/?q=ai:nie.feiping"Huang, Joshua Zhexue"https://www.zbmath.org/authors/?q=ai:huang.joshua-zhexue"Shen, Li"https://www.zbmath.org/authors/?q=ai:shen.liSummary: Spectral clustering is a hot topic and many spectral clustering algorithms have been proposed. These algorithms usually solve the discrete cluster indicator matrix by relaxing the original problems, obtaining the continuous solution and finally obtaining a discrete solution that is close to the continuous solution. However, such methods often result in a non-optimal solution to the original problem since the different steps solve different problems. In this paper, we propose a novel spectral clustering method, named Enhanced Balanced Min Cut (EBMC). In the new method, a new normalized cut model is proposed, in which a set of balance parameters are learned to capture the differences among different clusters. An iterative method with proven convergence is used to effectively solve the new model without eigendecomposition. Theoretical analysis reveals the connection between EBMC and the classical normalized cut.
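For contrast, the classical relax-then-discretize normalized-cut recipe that EBMC is compared against (and whose eigendecomposition EBMC avoids) looks like this in the two-cluster case; this is the standard textbook relaxation, not EBMC:

```python
import numpy as np

def normalized_cut_2way(W):
    """Classical 2-way normalized-cut relaxation: take the eigenvector for
    the second smallest eigenvalue of the normalized Laplacian
    L = I - D^{-1/2} W D^{-1/2}, then discretize it by its sign."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # continuous relaxed solution
    return (fiedler > 0).astype(int)     # sign thresholding = discretization step
```

The two steps (continuous eigenvector, then sign thresholding) solve different problems, which is exactly the suboptimality the abstract points out.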
Extensive experimental results show the effectiveness and efficiency of our approach in comparison with the state-of-the-art methods.Margin-based generalization for classifications with input noisehttps://www.zbmath.org/1483.682872022-05-16T20:40:13.078697Z"Choe, Hi Jun"https://www.zbmath.org/authors/?q=ai:choe.hi-jun"Koh, Hayeong"https://www.zbmath.org/authors/?q=ai:koh.hayeong"Lee, Jimin"https://www.zbmath.org/authors/?q=ai:lee.jiminSummary: Although machine learning shows state-of-the-art performance in a variety of fields, it lacks a theoretical understanding of how machine learning works. Recently, theoretical approaches are being actively studied; among the results are those concerning the margin and its distribution. In this paper, we focus especially on the role of the margin under perturbations of inputs and parameters. We show a generalization bound for two cases, a linear model for binary classification and neural networks for multi-classification, when the inputs carry normally distributed random noise. The additional generalization term caused by random noise is related to the margin and, for binary classification, is exponentially inversely proportional to the noise level. In neural networks, the additional generalization term depends on (input dimension) \(\times\) (norms of input and weights). For these results, we used the PAC-Bayesian framework.
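The central quantity can be made concrete for the linear binary case: the normalized margin \(y_i\langle w, x_i\rangle/\|w\|\), and how Gaussian input noise perturbs it. This is a toy numerical illustration of the objects in the bound, not the paper's proof:

```python
import numpy as np

def margins(w, X, y):
    """Normalized margins y_i * <w, x_i> / ||w|| of a linear classifier."""
    return y * (X @ w) / np.linalg.norm(w)

rng = np.random.default_rng(0)
w = np.array([1.0, 1.0])
X = np.array([[2.0, 2.0], [-2.0, -2.0]])
y = np.array([1.0, -1.0])

clean = margins(w, X, y)                 # both margins equal 4 / sqrt(2)
noise = 0.1 * rng.normal(size=X.shape)   # normally distributed input noise
noisy = margins(w, X + noise, y)         # large-margin points stay correct
```

A point's prediction flips only when the noise projected onto \(w/\|w\|\) exceeds its margin, which is why the extra generalization term shrinks as the margin grows relative to the noise level.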
This paper considers random noise and margin together, which should contribute to a better understanding of model sensitivity and the construction of robust generalization bounds.Global minima of overparameterized neural networkshttps://www.zbmath.org/1483.682882022-05-16T20:40:13.078697Z"Cooper, Yaim"https://www.zbmath.org/authors/?q=ai:cooper.yaimAdaptation based on generalized discrepancyhttps://www.zbmath.org/1483.682892022-05-16T20:40:13.078697Z"Cortes, Corinna"https://www.zbmath.org/authors/?q=ai:cortes.corinna"Mohri, Mehryar"https://www.zbmath.org/authors/?q=ai:mohri.mehryar"Muñoz Medina, Andrés"https://www.zbmath.org/authors/?q=ai:munoz-medina.andresSummary: We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm (DM), previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than the DM algorithm called \textit{generalized discrepancy}. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters.
Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization.Transport model for feature extractionhttps://www.zbmath.org/1483.682902022-05-16T20:40:13.078697Z"Czaja, Wojciech"https://www.zbmath.org/authors/?q=ai:czaja.wojciech"Dong, Dong"https://www.zbmath.org/authors/?q=ai:dong.dong.1|dong.dong"Jabin, Pierre-Emmanuel"https://www.zbmath.org/authors/?q=ai:jabin.pierre-emmanuel"Ndjakou Njeunje, Franck Olivier"https://www.zbmath.org/authors/?q=ai:ndjakou-njeunje.franck-olivierLearning light transport the reinforced wayhttps://www.zbmath.org/1483.682912022-05-16T20:40:13.078697Z"Dahm, Ken"https://www.zbmath.org/authors/?q=ai:dahm.ken"Keller, Alexander"https://www.zbmath.org/authors/?q=ai:keller.alexanderSummary: We show that the equations of reinforcement learning and light transport simulation are related integral equations. Based on this correspondence, a scheme to learn importance while sampling path space is derived. The new approach is demonstrated in a consistent light transport simulation algorithm that uses reinforcement learning to progressively learn where light comes from. Since importance sampling with this information also accounts for visibility, the number of light transport paths with zero contribution is dramatically reduced, resulting in much less noisy images within a fixed time budget.
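The reinforcement-learning side of the correspondence is the tabular Q-learning update, whose fixed-point equation has the same integral-equation shape as the rendering equation. The following is the generic update on a tiny two-state chain, not the paper's renderer:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, lr=0.5, gamma=0.9):
    """Tabular Q-learning: Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += lr * (target - Q[s, a])
    return Q

Q = np.zeros((2, 2))
# state 0, action 1 moves to state 1 with reward 1; state 1, action 0 returns
# to state 0 with reward 0.  Repeated updates converge to the fixed point
# Q[0, 1] = 1 + 0.9 * Q[1, 0], Q[1, 0] = 0.9 * Q[0, 1], i.e. Q[0, 1] = 1/0.19.
for _ in range(500):
    Q = q_update(Q, 0, 1, 1.0, 1)
    Q = q_update(Q, 1, 0, 0.0, 0)
```

In the light-transport reading, "reward" corresponds to emitted radiance and the learned Q-values guide importance sampling toward directions from which light actually arrives.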
For the entire collection see [Zbl 1400.65006].Smooth neighborhood recommender systemshttps://www.zbmath.org/1483.682922022-05-16T20:40:13.078697Z"Dai, Ben"https://www.zbmath.org/authors/?q=ai:dai.ben"Wang, Junhui"https://www.zbmath.org/authors/?q=ai:wang.junhui"Shen, Xiaotong"https://www.zbmath.org/authors/?q=ai:shen.xiaotong"Qu, Annie"https://www.zbmath.org/authors/?q=ai:qu.annieSummary: Recommender systems predict users' preferences over a large number of items by pooling similar information from other users and/or items in the presence of sparse observations. One major challenge is how to utilize user-item specific covariates and networks describing user-item interactions in a high-dimensional situation, for accurate personalized prediction. In this article, we propose a smooth neighborhood recommender in the framework of the latent factor models. A similarity kernel is utilized to borrow neighborhood information from continuous covariates over a user-item specific network, such as a user's social network, where the grouping information defined by discrete covariates is also integrated through the network. Consequently, user-item specific information is built into the recommender to battle the ``cold-start'' issue in the absence of observations in collaborative and content-based filtering. Moreover, we utilize a ``divide-and-conquer'' version of the alternating least squares algorithm to achieve scalable computation, and establish asymptotic results for the proposed method, demonstrating that it achieves superior prediction accuracy. Finally, we illustrate that the proposed method improves substantially over its competitors in simulated examples and real benchmark data -- \textit{Last.fm} music data.Learning the clustering of longitudinal shape data sets into a mixture of independent or branching trajectorieshttps://www.zbmath.org/1483.682932022-05-16T20:40:13.078697Z"Debavelaere, Vianney"https://www.zbmath.org/authors/?q=ai:debavelaere.vianney"Durrleman, Stanley"https://www.zbmath.org/authors/?q=ai:durrleman.stanley"Allassonnière, Stéphanie"https://www.zbmath.org/authors/?q=ai:allassonniere.stephanieSummary: Given repeated observations of several subjects over time, i.e. a longitudinal data set, this paper introduces a new model to learn a classification of the shapes progression in an unsupervised setting: we automatically cluster a longitudinal data set in different classes without labels. Our method learns for each cluster an average shape trajectory (or representative curve) and its variance in space and time. Representative trajectories are built as the combination of pieces of curves. This mixture model is flexible enough to handle independent trajectories for each cluster as well as fork and merge scenarios. The estimation of such nonlinear mixture models in high dimension is known to be difficult because of the trapping states effect that hampers the optimisation of cluster assignments during training. We address this issue by using a tempered version of the stochastic EM algorithm. Finally, we apply our algorithm to different data sets. First, synthetic data are used to show that a tempered scheme achieves better convergence. We then apply our method to different real data sets: 1D RECIST score used to monitor tumor growth, 3D facial expressions and meshes of the hippocampus.
In particular, we show how the method can be used to test different scenarios of hippocampus atrophy in ageing by using a heterogeneous population of normal ageing individuals and mildly cognitively impaired subjects.Finite-time performance of distributed temporal-difference learning with linear function approximationhttps://www.zbmath.org/1483.682942022-05-16T20:40:13.078697Z"Doan, Thinh T."https://www.zbmath.org/authors/?q=ai:doan.thinh-t"Maguluri, Siva Theja"https://www.zbmath.org/authors/?q=ai:maguluri.siva-theja"Romberg, Justin"https://www.zbmath.org/authors/?q=ai:romberg.justin-kMonotone learning with rectified wire networkshttps://www.zbmath.org/1483.682952022-05-16T20:40:13.078697Z"Elser, Veit"https://www.zbmath.org/authors/?q=ai:elser.veit"Schmidt, Dan"https://www.zbmath.org/authors/?q=ai:schmidt.dan"Yedidia, Jonathan"https://www.zbmath.org/authors/?q=ai:yedidia.jonathan-sSummary: We introduce a new neural network model, together with a tractable and monotone online learning algorithm. Our model describes feed-forward networks for classification, with one output node for each class. The only nonlinear operation is rectification using a ReLU function with a bias. However, there is a rectifier on every edge rather than at the nodes of the network. There are also weights, but these are positive, static, and associated with the nodes. Our \textit{rectified wire networks} are able to represent arbitrary Boolean functions. Only the bias parameters, on the edges of the network, are learned. Another departure in our approach, from standard neural networks, is that the loss function is replaced by a constraint. This constraint is simply that the value of the output node associated with the correct class should be zero. Our model has the property that the exact norm-minimizing parameter update, required to correctly classify a training item, is the solution to a quadratic program
that can be computed with a few passes through the network. We demonstrate a training algorithm using this update, called sequential deactivation (SDA), on MNIST and some synthetic datasets. Upon adopting a natural choice for the nodal weights, SDA has no hyperparameters other than those describing the network structure. Our experiments explore behavior with respect to network size and depth in a family of sparse expander networks.Graphical Lasso and thresholding: equivalence and closed-form solutionshttps://www.zbmath.org/1483.682962022-05-16T20:40:13.078697Z"Fattahi, Salar"https://www.zbmath.org/authors/?q=ai:fattahi.salar"Sojoudi, Somayeh"https://www.zbmath.org/authors/?q=ai:sojoudi.somayehSummary: Graphical Lasso (GL) is a popular method for learning the structure of an undirected graphical model, which is based on an \(l_1\) regularization technique. The objective of this paper is to compare the computationally-heavy GL technique with a numerically-cheap heuristic method that is based on simply thresholding the sample covariance matrix. To this end, two notions of sign-consistent and inverse-consistent matrices are developed, and then it is shown that the thresholding and GL methods are equivalent if: (i) the thresholded sample covariance matrix is both sign-consistent and inverse-consistent, and (ii) the gap between the largest thresholded and the smallest un-thresholded entries of the sample covariance matrix is not too small. By building upon this result, it is proved that the GL method -- as a conic optimization problem -- has an explicit closed-form solution if the thresholded sample covariance matrix has an acyclic structure. This result is then generalized to arbitrary sparse
support graphs, where a formula is found to obtain an approximate solution of GL. Furthermore, it is shown that the approximation error of the derived explicit formula decreases exponentially fast with respect to the length of the minimum-length cycle of the sparsity graph. The developed results are demonstrated on synthetic data, functional MRI data, traffic flows for transportation networks, and massive randomly generated data sets. We show that the proposed method can obtain an accurate approximation of the GL for instances with the sizes as large as \(80,000\times 80,000\) (more than 3.2 billion variables) in less than 30 minutes on a standard laptop computer running MATLAB, while other state-of-the-art methods do not converge within 4 hours.Joint PLDA for simultaneous modeling of two factorshttps://www.zbmath.org/1483.682972022-05-16T20:40:13.078697Z"Ferrer, Luciana"https://www.zbmath.org/authors/?q=ai:ferrer.luciana"McLaren, Mitchell"https://www.zbmath.org/authors/?q=ai:mclaren.mitchellSummary: Probabilistic linear discriminant analysis (PLDA) is a method used for biometric problems like speaker or face recognition that models the variability of the samples using two latent variables, one that depends on the class of the sample and another one that is assumed independent across samples and models the within-class variability. In this work, we propose a generalization of PLDA that enables joint modeling of two sample-dependent factors: the class of interest and a nuisance condition. The approach does not change the basic form of PLDA but rather modifies the training procedure to consider the dependency across samples of the latent variable that models within-class variability. While the identity of the nuisance condition is needed during training, it is not needed during testing since we propose a scoring procedure that marginalizes over the corresponding latent variable.
We show results on a multilingual speaker-verification task, where the language spoken is considered a nuisance condition. The proposed joint PLDA approach leads to significant performance gains in this task for two different data sets, in particular when the training data contains mostly or only monolingual speakers.Improvements of AdaBoost for multi-label classificationhttps://www.zbmath.org/1483.682982022-05-16T20:40:13.078697Z"Fu, Zhong Liang"https://www.zbmath.org/authors/?q=ai:fu.zhongliang"Zhang, Dan Pu"https://www.zbmath.org/authors/?q=ai:zhang.dan-pu"Wang, Li Li"https://www.zbmath.org/authors/?q=ai:wang.lili|wang.lili.2|wang.lili.6|wang.lili.1|wang.lili.3|wang.lili.8|wang.lili.5|wang.lili.7|wang.lili.4(no abstract)Fitted Q-iteration by functional networks for control problemshttps://www.zbmath.org/1483.682992022-05-16T20:40:13.078697Z"Gaeta, Matteo"https://www.zbmath.org/authors/?q=ai:gaeta.matteo"Loia, Vincenzo"https://www.zbmath.org/authors/?q=ai:loia.vincenzo"Miranda, Sergio"https://www.zbmath.org/authors/?q=ai:miranda.sergio"Tomasiello, Stefania"https://www.zbmath.org/authors/?q=ai:tomasiello.stefaniaSummary: In this paper, a new offline model-free approximate Q-iteration is proposed. Following the idea of Fitted Q-iteration, we use a computational scheme based on Functional Networks, which have been proved to be a powerful alternative to Neural Networks because they do not require a large number of training samples. We state a condition for the convergence of the proposed technique and we apply it to three classical control problems, namely a DC motor, a pendulum swing-up, and a robotic arm.
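Generic fitted Q-iteration alternates between computing Bellman targets from a fixed batch of transitions and refitting a regressor to them. The sketch below uses a one-hot (tabular) least-squares fit, which reduces to averaging the targets, as a stand-in for the paper's functional networks:

```python
import numpy as np

def fitted_q_iteration(transitions, n_actions, n_iters=30, gamma=0.9):
    """Offline fitted Q-iteration on a batch of (s, a, r, s') transitions.

    Each iteration regresses Q(s, a) onto the Bellman targets
    r + gamma * max_a' Q(s', a').  With one-hot state-action features the
    least-squares fit is exact and equals per-(s, a) target averaging.
    """
    states = sorted({s for s, a, r, s2 in transitions} |
                    {s2 for s, a, r, s2 in transitions})
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), n_actions))
    for _ in range(n_iters):
        targets = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for s, a, r, s2 in transitions:
            targets[idx[s], a] += r + gamma * Q[idx[s2]].max()
            counts[idx[s], a] += 1
        mask = counts > 0
        Q[mask] = targets[mask] / counts[mask]   # exact fit in the tabular case
    return Q, idx
```

The functional-network (or any other) regressor simply replaces the averaging step; the batch of transitions is never re-collected, which is what makes the method offline and model-free.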
We present a comparative study to show the approximation capabilities of our method with a relatively small number of training samples.The relationship between agnostic selective classification, active learning and the disagreement coefficienthttps://www.zbmath.org/1483.683002022-05-16T20:40:13.078697Z"Gelbhart, Roei"https://www.zbmath.org/authors/?q=ai:gelbhart.roei"El-Yaniv, Ran"https://www.zbmath.org/authors/?q=ai:el-yaniv.ranSummary: A selective classifier \((f,g)\) comprises a classification function \(f\) and a binary selection function \(g\), which determines if the classifier abstains from prediction, or uses \(f\) to predict. The classifier is called pointwise-competitive if it classifies each point identically to the best classifier in hindsight (from the same class), whenever it does not abstain. The quality of such a classifier is quantified by its rejection mass, defined to be the probability mass of the points it rejects. A ``fast'' rejection rate is achieved if the rejection mass is bounded from above by \(\tilde{O}(1/m)\) where \(m\) is the number of labeled examples used to train the classifier (and \(\tilde{O}\) hides logarithmic factors). Pointwise-competitive selective (PCS) classifiers are intimately related to disagreement-based active learning and it is known that in the realizable case, a fast rejection rate of a known PCS algorithm (called Consistent Selective Strategy) is equivalent to an exponential speedup of the well-known CAL active algorithm. \par We focus on the agnostic setting, for which there is a known algorithm called LESS that learns a PCS classifier and achieves a fast rejection rate (depending on Hanneke's disagreement coefficient) under strong assumptions. We present an improved PCS learning algorithm called ILESS for which we show a fast rate (depending on Hanneke's disagreement coefficient) without any assumptions. Our rejection bound smoothly interpolates the realizable and agnostic settings. The main result of this paper is an equivalence between the following three entities: (i) the existence of a fast rejection rate for any PCS learning algorithm (such as ILESS); (ii) a poly-logarithmic bound for Hanneke's disagreement coefficient; and (iii) an exponential speedup for a new disagreement-based active learner called ActiveiLESS.Attention based adversarially regularized learning for network embeddinghttps://www.zbmath.org/1483.683012022-05-16T20:40:13.078697Z"He, Jieyue"https://www.zbmath.org/authors/?q=ai:he.jieyue"Wang, Jinmeng"https://www.zbmath.org/authors/?q=ai:wang.jinmeng"Yu, Zhizhou"https://www.zbmath.org/authors/?q=ai:yu.zhizhouSummary: Network embedding, also known as graph embedding and network representation learning, is an effective method for representing graphs or network data in a low-dimensional space. Most existing methods focus on preserving network topology and minimizing the reconstruction errors to learn a low-dimensional embedding vector representation of the network. In addition, some researchers are devoted to the embedding learning of attribute networks. These researchers usually study the two matrices of network structure and network attributes separately, and then merge them to realize the embedding learning representation of attribute networks. These studies have different performances on a variety of downstream tasks.
However, most of these methods have two problems: first, these methods mostly use shallow models to learn structure or attribute embeddings, which do not make full use of the rich information contained in the network, such as the neighborhood information of nodes; second, the distribution of the learned network low-dimensional vector representation is overlooked, which leads to poor generalization ability of the model on some real-world network data. Therefore, this paper proposes an adversarially regularized network representation learning model based on an attention mechanism, which encodes the topology features and content information of the network into a low-dimensional embedding vector representation through a graph attention autoencoder. Meanwhile, through an adversarial training schema, the learned low-dimensional vector representation could circumvent the requirement of an explicit prior distribution, and thus obtain better generalization ability. Extensive experiments on tasks of link prediction and node clustering demonstrate the effectiveness of learned network embeddings.Splicing learning: a novel few-shot learning approachhttps://www.zbmath.org/1483.683022022-05-16T20:40:13.078697Z"Hu, Lianting"https://www.zbmath.org/authors/?q=ai:hu.lianting"Liang, Huiying"https://www.zbmath.org/authors/?q=ai:liang.huiying"Lu, Long"https://www.zbmath.org/authors/?q=ai:lu.longSummary: In recent years, among most approaches for few-shot learning, there exists a default premise that a big homogeneous-annotated dataset is applied to pre-train the few-shot learning model. However, since few-shot learning approaches are always used in the domain where annotated samples are rare, it would be difficult to collect another big annotated dataset in the same domain. Therefore, we propose Splicing Learning to complete the few-shot learning task without the help of a big homogeneous-annotated dataset.
Splicing Learning can increase the sample size of the few-shot set by splicing multiple original images into a spliced image. Unlike data augmentation technologies, there is no false information in the spliced image. Through experiments, we find that the configuration ``All-splice + WSG'' can achieve the best test accuracy of 90.81\%, 9.19\% better than the baseline. The performance improvement of the model can be attributed mostly to Splicing Learning and has little to do with the complexity of the CNN framework. Compared with metric learning, meta-learning, and GAN models, both Splicing Learning and data augmentation achieve superior performance. At the same time, the combination of Splicing Learning and data augmentation can further improve the test accuracy of the model to 96.33\%. The full implementation is available at \url{https://github.com/xiangxiangzhuyi/Splicing-learning}.Corruption-tolerant bandit learninghttps://www.zbmath.org/1483.683032022-05-16T20:40:13.078697Z"Kapoor, Sayash"https://www.zbmath.org/authors/?q=ai:kapoor.sayash"Patel, Kumar Kshitij"https://www.zbmath.org/authors/?q=ai:patel.kumar-kshitij"Kar, Purushottam"https://www.zbmath.org/authors/?q=ai:kar.purushottamSummary: We present algorithms for solving multi-armed and linear-contextual bandit tasks in the face of adversarial corruptions in the arm responses. Traditional algorithms for solving these problems assume that nothing but mild, e.g., i.i.d. sub-Gaussian, noise disrupts an otherwise clean estimate of the utility of the arm. This assumption and the resulting approaches can fail catastrophically if there is an observant adversary that corrupts even a small fraction of the responses generated when arms are pulled. To rectify this, we propose algorithms that use recent advances in robust statistical estimation to perform arm selection in polynomial time.
Our algorithms are easy to implement and vastly outperform several existing UCB and EXP-style algorithms for stochastic and adversarial multi-armed and linear-contextual bandit problems in a wide variety of experimental settings. Our algorithms enjoy minimax-optimal regret bounds and can tolerate an adversary that is allowed to corrupt up to a universal constant fraction of the arms pulled by the algorithm.Parsimonious online learning with kernels via sparse projections in function spacehttps://www.zbmath.org/1483.683042022-05-16T20:40:13.078697Z"Koppel, Alec"https://www.zbmath.org/authors/?q=ai:koppel.alec"Warnell, Garrett"https://www.zbmath.org/authors/?q=ai:warnell.garrett"Stump, Ethan"https://www.zbmath.org/authors/?q=ai:stump.ethan"Ribeiro, Alejandro"https://www.zbmath.org/authors/?q=ai:ribeiro.alejandro-rSummary: Despite their attractiveness, popular perception is that techniques for nonparametric function approximation do not scale to streaming data due to an intractable growth in the amount of storage they require. To solve this problem in a memory-affordable way, we propose an online technique based on functional stochastic gradient descent in tandem with supervised sparsification based on greedy function subspace projections. The method, called parsimonious online learning with kernels (POLK), provides a controllable tradeoff between its solution accuracy and the amount of memory it requires. We derive conditions under which the generated function sequence converges almost surely to the optimal function, and we establish that the memory requirement remains finite. We evaluate POLK for kernel multi-class logistic regression and kernel hinge-loss classification on three canonical data sets: a synthetic Gaussian mixture model, the MNIST hand-written digits, and the Brodatz texture database. On all three tasks, we observe a favorable trade-off among objective function evaluation, classification performance, and complexity of the nonparametric regressor extracted by the proposed method.Use of static surrogates in hyperparameter optimizationhttps://www.zbmath.org/1483.683052022-05-16T20:40:13.078697Z"Lakhmiri, Dounia"https://www.zbmath.org/authors/?q=ai:lakhmiri.dounia"Le Digabel, Sébastien"https://www.zbmath.org/authors/?q=ai:le-digabel.sebastienSummary: Optimizing the hyperparameters and architecture of a neural network is a long yet necessary phase in most applications. This time-consuming process can benefit from strategies designed to discard low-quality configurations and quickly focus on more promising candidates. This work aims at enhancing HyperNOMAD, a library that adapts a direct search derivative-free optimization algorithm to tune both the architecture and the training of a neural network simultaneously. Two static surrogates are developed to trigger an early stopping during the configuration evaluation and strategically rank a pool of candidates. These additions to HyperNOMAD are shown to reduce its resource consumption by orders of magnitude without harming the quality of the proposed solutions.Hadamard matrix guided online hashinghttps://www.zbmath.org/1483.683062022-05-16T20:40:13.078697Z"Lin, Mingbao"https://www.zbmath.org/authors/?q=ai:lin.mingbao"Ji, Rongrong"https://www.zbmath.org/authors/?q=ai:ji.rongrong"Liu, Hong"https://www.zbmath.org/authors/?q=ai:liu.hong.1"Sun, Xiaoshuai"https://www.zbmath.org/authors/?q=ai:sun.xiaoshuai"Chen, Shen"https://www.zbmath.org/authors/?q=ai:chen.shen"Tian, Qi"https://www.zbmath.org/authors/?q=ai:tian.qiSummary: Online image hashing, which receives large-scale data in a streaming manner and updates the hash functions on-the-fly, has recently attracted increasing research attention. Its key challenge lies in the difficulty of balancing the learning timeliness and model accuracy.
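The Sylvester construction behind the Hadamard target codes used by this hashing scheme builds an orthogonal \(\pm 1\) matrix of order \(2^k\) whose columns are mutually orthogonal, which is why they can serve as class target codes. A minimal sketch (illustrative only, not the HMOH authors' code):

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2**k via the Sylvester construction:
    H_{2n} = [[H_n, H_n], [H_n, -H_n]], starting from H_1 = [1].
    Columns are mutually orthogonal +/-1 vectors."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])  # double the order each step
    return H
```

For order 8, `H @ H.T` equals `8 * I`, confirming orthogonality of the columns.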
To this end, most works follow a supervised setting, i.e., using class labels to boost the hashing performance, which is deficient in two aspects: first, strong constraints, e.g., orthogonal or similarity preserving, are used, which however are typically relaxed and lead to large accuracy drops. Second, large numbers of training batches are required to learn the up-to-date hash functions, which largely increase the learning complexity. To handle the above challenges, a novel supervised online hashing scheme termed \textbf{H}adamard \textbf{M}atrix Guided \textbf{O}nline \textbf{H}ashing (HMOH) is proposed in this paper. Our key innovation lies in introducing the Hadamard matrix, an orthogonal binary matrix built via the Sylvester method. In particular, to relieve the need for strong constraints, we regard each column of the Hadamard matrix as the target code for each class label, which by nature satisfies several desired properties of hashing codes. To accelerate the online training, LSH is first adopted to align the lengths of the target code and the to-be-learned binary code. We then treat the learning of hash functions as a set of binary classification problems to fit the assigned target code. Finally, extensive experiments on four widely-used benchmarks demonstrate the superior accuracy and efficiency of HMOH over various state-of-the-art methods. Code is available at \url{https://github.com/lmbxmu/mycode}.An alternating nonmonotone projected Barzilai-Borwein algorithm of nonnegative factorization of big matriceshttps://www.zbmath.org/1483.683072022-05-16T20:40:13.078697Z"Li, Ting"https://www.zbmath.org/authors/?q=ai:li.ting"Tang, Jiayi"https://www.zbmath.org/authors/?q=ai:tang.jiayi"Wan, Zhong"https://www.zbmath.org/authors/?q=ai:wan.zhongSummary: In this paper, a new alternating nonmonotone projected Barzilai-Borwein (BB) algorithm is developed for solving large scale problems of nonnegative matrix factorization.
Unlike the existing algorithms available in the literature, a nonmonotone line search strategy is proposed to find suitable step lengths, and an adaptive BB spectral parameter is employed to generate search directions such that the constructed subproblems are efficiently solved. Apart from establishing global convergence of this algorithm, numerical tests on three synthetic datasets, four public face image datasets and a real-world transcriptomic dataset are conducted to show the advantages of the developed algorithm. It is concluded that in terms of numerical efficiency, noise robustness and quality of matrix factorization, our algorithm is promising and applicable to face image reconstruction and deep mining of transcriptomic profiles of the sub-genomes in hybrid fish lineage, compared with the state-of-the-art algorithms.Learning multifunctional binary codes for personalized image retrievalhttps://www.zbmath.org/1483.683082022-05-16T20:40:13.078697Z"Liu, Haomiao"https://www.zbmath.org/authors/?q=ai:liu.haomiao"Wang, Ruiping"https://www.zbmath.org/authors/?q=ai:wang.ruiping"Shan, Shiguang"https://www.zbmath.org/authors/?q=ai:shan.shiguang"Chen, Xilin"https://www.zbmath.org/authors/?q=ai:chen.xilinSummary: Due to the highly complex semantic information of images, even with the same query image, the expected content-based image retrieval results could be very different and personalized in different scenarios. However, most existing hashing methods only preserve one single type of semantic similarity, making them incapable of addressing such realistic retrieval tasks. To deal with this problem, we propose a unified hashing framework to encode multiple types of information into the binary codes by exploiting convolutional networks (CNNs). Specifically, we assume that typical retrieval tasks are generally defined in two aspects, i.e. high-level semantics (e.g. object categories) and visual attributes (e.g. object shape and color).
To this end, our Dual Purpose Hashing model is trained to jointly preserve two kinds of similarities characterizing the two aspects respectively. Moreover, since images with both category and attribute labels are scarce, our model is carefully designed to leverage the abundant partially labelled data as training inputs to alleviate the risk of overfitting. With such a framework, the binary codes of newly arriving images can be readily obtained by quantizing the outputs of a specific CNN layer, and different retrieval tasks can be achieved by using the binary codes in different ways. Experiments on two large-scale datasets show that our method achieves comparable or even better performance than those state-of-the-art methods specifically designed for each individual retrieval task while being more compact than the compared methods.Weakly-supervised semantic guided hashing for social image retrievalhttps://www.zbmath.org/1483.683092022-05-16T20:40:13.078697Z"Li, Zechao"https://www.zbmath.org/authors/?q=ai:li.zechao"Tang, Jinhui"https://www.zbmath.org/authors/?q=ai:tang.jinhui"Zhang, Liyan"https://www.zbmath.org/authors/?q=ai:zhang.liyan"Yang, Jian"https://www.zbmath.org/authors/?q=ai:yang.jianSummary: Hashing has been widely investigated for large-scale image retrieval due to its search effectiveness and computation efficiency. In this work, we propose a novel Semantic Guided Hashing method coupled with binary matrix factorization to perform more effective nearest neighbor image search by simultaneously exploring the weakly-supervised rich community-contributed information and the underlying data structures. To uncover the underlying semantic information from the weakly-supervised user-provided tags, the binary matrix factorization model is leveraged for learning the binary features of images while the problem of imperfect tags is well addressed. The uncovered semantic information effectively guides the discrete hash code learning.
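The quantization step mentioned above, turning the real-valued outputs of a network layer into bits, is common to these hashing methods and can be sketched generically (thresholding at zero is an assumption; the papers' learned quantizers differ):

```python
import numpy as np

def binary_codes(embeddings):
    """Quantize real-valued embeddings (e.g. outputs of a CNN layer)
    into {0,1} hash codes by thresholding at zero.
    Generic sketch of the quantization step, not any specific paper's method."""
    return (np.asarray(embeddings) >= 0).astype(np.uint8)

def hamming_distance(a, b):
    """Number of differing bits between two codes (the retrieval metric)."""
    return int(np.count_nonzero(a != b))
```

Retrieval then ranks database codes by Hamming distance to the query code.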
The underlying data structures are discovered by adaptively learning a discriminative data graph, which makes the learned hash codes preserve meaningful neighbors. To the best of our knowledge, the proposed method is the first work that incorporates hash code learning, semantic information mining and data structure discovery into one unified framework. Besides, the proposed method is extended to a deep approach for the optimal compatibility of discriminative feature learning and hash code learning. Experiments are conducted on two widely-used social image datasets and the proposed method achieves encouraging performance compared with the state-of-the-art hashing methods.Recruitment-imitation mechanism for evolutionary reinforcement learninghttps://www.zbmath.org/1483.683102022-05-16T20:40:13.078697Z"Lü, Shuai"https://www.zbmath.org/authors/?q=ai:lu.shuai.1|lu.shuai"Han, Shuai"https://www.zbmath.org/authors/?q=ai:han.shuai"Zhou, Wenbo"https://www.zbmath.org/authors/?q=ai:zhou.wenbo"Zhang, Junwei"https://www.zbmath.org/authors/?q=ai:zhang.junweiSummary: Reinforcement learning, evolutionary algorithms and imitation learning are three principal methods to deal with continuous control tasks. Reinforcement learning is sample efficient, yet sensitive to hyperparameter settings and needs efficient exploration; evolutionary algorithms are stable but have low sample efficiency; imitation learning is both sample efficient and stable, but it requires the guidance of expert data. In this paper, we propose Recruitment-imitation Mechanism (RIM) for evolutionary reinforcement learning, a scalable framework that combines advantages of the three methods mentioned above. The core of this framework is a dual-actor, single-critic reinforcement learning agent. This agent can recruit high-fitness actors from the population performing evolutionary algorithms, which instructs itself to learn from the experience replay buffer.
At the same time, low-fitness actors in the evolutionary population can imitate behavior patterns of the reinforcement learning agent and promote their fitness level. Reinforcement and imitation learners in this framework can be replaced with any off-policy actor-critic reinforcement learner and data-driven imitation learner. We evaluate RIM on a series of benchmarks for continuous control tasks in Mujoco. The experimental results show that RIM outperforms prior evolutionary or reinforcement learning methods. The performance of RIM's components is significantly better than that of the components of the previous evolutionary reinforcement learning algorithm, and recruitment using soft update enables the reinforcement learning agent to learn faster than with hard update.Performance of Johnson-Lindenstrauss transform for \(k\)-means and \(k\)-medians clusteringhttps://www.zbmath.org/1483.683112022-05-16T20:40:13.078697Z"Makarychev, Konstantin"https://www.zbmath.org/authors/?q=ai:makarychev.konstantin-s"Makarychev, Yury"https://www.zbmath.org/authors/?q=ai:makarychev.yury-s"Razenshteyn, Ilya"https://www.zbmath.org/authors/?q=ai:razenshteyn.ilya-pTrain and test tightness of LP relaxations in structured predictionhttps://www.zbmath.org/1483.683122022-05-16T20:40:13.078697Z"Meshi, Ofer"https://www.zbmath.org/authors/?q=ai:meshi.ofer"London, Ben"https://www.zbmath.org/authors/?q=ai:london.ben"Weller, Adrian"https://www.zbmath.org/authors/?q=ai:weller.adrian"Sontag, David"https://www.zbmath.org/authors/?q=ai:sontag.davidSummary: Structured prediction is used in areas including computer vision and natural language processing to predict structured outputs such as segmentations or parse trees. In these settings, prediction is performed by MAP inference or, equivalently, by solving an integer linear program. Because of the complex scoring functions required to obtain accurate predictions, both learning and inference typically require the use of approximate solvers.
We propose a theoretical explanation for the striking observation that approximations based on linear programming (LP) relaxations are often tight (exact) on real-world instances. In particular, we show that learning with LP relaxed inference encourages integrality of training instances, and that this training tightness generalizes to test data.Robust distribution-based nonnegative matrix factorizations for dimensionality reductionhttps://www.zbmath.org/1483.683132022-05-16T20:40:13.078697Z"Peng, Xinjun"https://www.zbmath.org/authors/?q=ai:peng.xinjun"Xu, Dong"https://www.zbmath.org/authors/?q=ai:xu.dong"Chen, De"https://www.zbmath.org/authors/?q=ai:chen.deSummary: As a popular dimensionality-reduction technique, nonnegative matrix factorization (NMF) has been widely researched since it is consistent with human cognitive processes in psychology and physiology. This paper presents a novel NMF framework, called robust distribution-based NMF (RDNMF), to learn robustly discriminative representations of data. In this RDNMF, a Kullback-Leibler divergence is introduced to measure the similarity between the data and the representations, which fully preserves the geometrical structure of data. Meanwhile, this RDNMF employs the \(l_{2, 1}\)-norm loss to reduce the influence of noise and outliers. This paper further proposes a semi-supervised RDNMF (SRDNMF) by enforcing the representations of labeled points in the same class to be aligned on the same axis. The proposed RDNMF and SRDNMF are solved by modified multiplicative update rules.
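For reference, the classical multiplicative updates for NMF under the generalized KL divergence (the Lee-Seung baseline that such modified rules build on) can be sketched as follows; this sketch omits the paper's \(l_{2,1}\) robustness term and supervision:

```python
import numpy as np

def kl_nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Multiplicative updates for NMF under the generalized KL divergence,
    in the style of Lee & Seung. Classical baseline only; not the
    RDNMF/SRDNMF algorithms of the paper."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        # H-update: elementwise multiply by (W^T (V/WH)) / column sums of W
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        # W-update: elementwise multiply by ((V/WH) H^T) / row sums of H
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

On exactly low-rank nonnegative data these updates recover the factorization to high accuracy.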
Clustering experiments on seven benchmark datasets demonstrate the effectiveness of our methods in comparison to other state-of-the-art methods.Dual-channel hybrid community detection in attributed networkshttps://www.zbmath.org/1483.683142022-05-16T20:40:13.078697Z"Qin, Meng"https://www.zbmath.org/authors/?q=ai:qin.meng"Lei, Kai"https://www.zbmath.org/authors/?q=ai:lei.kaiSummary: This study considers the problem of hybrid community detection in attributed networks based on the information of network topology and attributes with the aim to address the following two shortcomings of existing hybrid community detection methods. First, many of these methods are based on the assumption that network topology and attributes carry consistent information but ignore the intrinsic mismatch correlation between them. Second, network topology is typically treated as the dominant source of information, with attributes employed as the auxiliary source; the dominant effect of attributes is seldom explored or indeed considered. To address these limitations, this paper presents a novel Dual-channel Hybrid Community Detection (DHCD) method that considers the dominant effects of topology and attributes separately. The concept of transition relation between the topology and attribute clusters is introduced to explore the mismatch correlation between the two sources and learn the behavioral and content diversity of nodes. An extended overlapping community detection algorithm is introduced based on the two types of diversity. By utilizing network attributes, DHCD can simultaneously derive the community partitioning membership and corresponding semantic descriptions. 
The superiority of DHCD over state-of-the-art community detection methods is demonstrated on a set of synthetic and real-world networks.Structure learning for relational logistic regression: an ensemble approachhttps://www.zbmath.org/1483.683152022-05-16T20:40:13.078697Z"Ramanan, Nandini"https://www.zbmath.org/authors/?q=ai:ramanan.nandini"Kunapuli, Gautam"https://www.zbmath.org/authors/?q=ai:kunapuli.gautam"Khot, Tushar"https://www.zbmath.org/authors/?q=ai:khot.tushar"Fatemi, Bahare"https://www.zbmath.org/authors/?q=ai:fatemi.bahare"Kazemi, Seyed Mehran"https://www.zbmath.org/authors/?q=ai:kazemi.seyed-mehran"Poole, David"https://www.zbmath.org/authors/?q=ai:poole.david-l"Kersting, Kristian"https://www.zbmath.org/authors/?q=ai:kersting.kristian"Natarajan, Sriraam"https://www.zbmath.org/authors/?q=ai:natarajan.sriraamSummary: We consider the problem of learning Relational Logistic Regression (RLR). Unlike standard logistic regression, the features of RLR are first-order formulae with associated weight vectors instead of scalar weights. We turn the problem of learning RLR to learning these vector-weighted formulae and develop a learning algorithm based on the recently successful functional-gradient boosting methods for probabilistic logic models. We derive the functional gradients and show how weights can be learned simultaneously in an efficient manner. 
Our empirical evaluation on standard data sets demonstrates the superiority of our approach over other methods for learning RLR.A nearest-neighbor search model for distance metric learninghttps://www.zbmath.org/1483.683162022-05-16T20:40:13.078697Z"Ruan, Yibang"https://www.zbmath.org/authors/?q=ai:ruan.yibang"Xiao, Yanshan"https://www.zbmath.org/authors/?q=ai:xiao.yanshan"Hao, Zhifeng"https://www.zbmath.org/authors/?q=ai:hao.zhifeng"Liu, Bo"https://www.zbmath.org/authors/?q=ai:liu.bo.3|liu.bo.1|liu.bo.4|liu.bo|liu.bo.2Summary: Distance metric learning aims to deal with the data distribution by learning a suitable distance metric from the training instances. For distance metric learning, the optimization constraints can be constructed based on the similar and dissimilar instance pairs. The instance pairs are generated by selecting the nearest-neighbors for each training instance. However, most methods select the same and fixed nearest-neighbor number for different training instances, which may limit performance for learning distance metric. In this paper, we propose a nearest-neighbor search model for distance metric learning (NNS-DML), which is capable of constructing the metric optimization constraints by searching different optimal nearest-neighbor numbers for different training instances. Specifically, we formulate a nearest-neighbor search matrix to contain the nearest-neighbor correlations of all training instances. Using the search matrix, we can construct and weight the metric optimization constraints of each training instance, such that the influence of its irrelevant features for its corresponding similar and dissimilar instance pairs can be reduced. Moreover, we develop a \(k\)-free nearest-neighbor model for classification problems via the SVM solver, which can ignore the setting of \(k\). 
Extensive experiments show that the proposed NNS-DML method outperforms the state-of-the-art distance metric learning methods.A guided FP-growth algorithm for mining multitude-targeted item-sets and class association rules in imbalanced datahttps://www.zbmath.org/1483.683172022-05-16T20:40:13.078697Z"Shabtay, Lior"https://www.zbmath.org/authors/?q=ai:shabtay.lior"Fournier-Viger, Philippe"https://www.zbmath.org/authors/?q=ai:fournier-viger.philippe"Yaari, Rami"https://www.zbmath.org/authors/?q=ai:yaari.rami"Dattner, Itai"https://www.zbmath.org/authors/?q=ai:dattner.itaiSummary: Identifying frequent item-sets is a popular data-mining task. It consists of finding sets of items frequently appearing in data. Yet, finding all frequent item-sets in large or dense datasets may be time-consuming, and a user may be interested merely in some specific item-sets rather than all of them. Recently, methods have been proposed for targeted item-set mining; that is to calculate the support of some item-sets of interest. Though this approach is often more suitable for real applications than traditional item-set mining approaches, performance remains an issue. To address that issue, this paper presents a novel algorithm for multitude-targeted mining, named Guided Frequent Pattern-Growth (GFP-Growth). The GFP-Growth algorithm is designed to quickly mine a given set of item-sets using a small amount of memory. This paper proves that GFP-Growth yields the exact frequency-counts for each item-set of interest. It further shows that GFP-Growth can boost the performance for several problems requiring item-set mining. We specifically study the problem of generating minority-class rules from imbalanced data and develop the Minority-Report Algorithm (MRA) that uses GFP-Growth to solve this problem efficiently. 
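A naive baseline for the targeted counting task that GFP-Growth accelerates can be sketched as follows (illustrative; the real algorithm shares counting work through an FP-tree, which this single-pass scan does not):

```python
from collections import defaultdict

def targeted_support(transactions, targets):
    """Frequency counts for a given list of target item-sets only,
    instead of mining all frequent item-sets.
    Naive baseline sketch, not the GFP-Growth algorithm itself."""
    targets = [frozenset(t) for t in targets]
    counts = defaultdict(int)
    for tx in transactions:
        tx = set(tx)
        for t in targets:
            if t <= tx:  # target item-set contained in this transaction
                counts[t] += 1
    return {tuple(sorted(t)): counts[t] for t in targets}
```

This is exact but re-scans every transaction per target, which is the cost a guided FP-tree traversal avoids.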
We prove several theoretical properties of MRA and present experimental results showing substantial performance gain.TSK-Streams: learning TSK fuzzy systems for regression on data streamshttps://www.zbmath.org/1483.683182022-05-16T20:40:13.078697Z"Shaker, Ammar"https://www.zbmath.org/authors/?q=ai:shaker.ammar"Hüllermeier, Eyke"https://www.zbmath.org/authors/?q=ai:hullermeier.eykeSummary: The problem of adaptive learning from evolving and possibly non-stationary data streams has attracted a lot of interest in machine learning in the recent past, and also stimulated research in related fields, such as computational intelligence and fuzzy systems. In particular, several rule-based methods for the incremental induction of regression models have been proposed. In this paper, we develop a method that combines the strengths of two existing approaches rooted in different learning paradigms. More concretely, our method adopts basic principles of the state-of-the-art learning algorithm AMRules and enriches them by the representational advantages of fuzzy rules. In a comprehensive experimental study, TSK-Streams is shown to be highly competitive in terms of performance.Random feature-based online multi-kernel learning in environments with unknown dynamicshttps://www.zbmath.org/1483.683192022-05-16T20:40:13.078697Z"Shen, Yanning"https://www.zbmath.org/authors/?q=ai:shen.yanning"Chen, Tianyi"https://www.zbmath.org/authors/?q=ai:chen.tianyi"Giannakis, Georgios B."https://www.zbmath.org/authors/?q=ai:giannakis.georgios-bSummary: Kernel-based methods exhibit well-documented performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. Especially when the latter is not available, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. 
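The random feature approximation that such multi-kernel schemes lean on can be sketched for an RBF kernel, following the Rahimi-Recht construction (a generic sketch; parameter names are illustrative and this is not the Raker code):

```python
import numpy as np

def random_fourier_features(X, num_features, gamma=1.0, seed=0):
    """Random Fourier feature map z(x) such that z(x).z(y) approximates
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||**2).
    Frequencies are drawn from the kernel's spectral density."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```

The approximation error decays like \(O(1/\sqrt{D})\) in the number of features \(D\), which is what makes the kernel method scalable online.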
Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops a scalable multi-kernel learning scheme (termed Raker) to obtain the sought nonlinear learning function `on the fly,' first for static environments. To further boost performance in dynamic environments, an adaptive multi-kernel learning scheme (termed AdaRaker) is developed. AdaRaker accounts not only for data-driven learning of kernel combination, but also for the unknown dynamics. Performance is analyzed in terms of both static and dynamic regrets. AdaRaker is uniquely capable of tracking nonlinear
learning
functions in environments with unknown dynamics, with analytic performance guarantees. Tests with synthetic and real datasets are carried out to showcase the effectiveness of the novel algorithms.Determining the number of latent factors in statistical multi-relational learninghttps://www.zbmath.org/1483.683202022-05-16T20:40:13.078697Z"Shi, Chengchun"https://www.zbmath.org/authors/?q=ai:shi.chengchun"Lu, Wenbin"https://www.zbmath.org/authors/?q=ai:lu.wenbin"Song, Rui"https://www.zbmath.org/authors/?q=ai:song.ruiSummary: Statistical relational learning is primarily concerned with learning and inferring relationships between entities in large-scale knowledge graphs. \textit{M. Nickel}, \textit{V. Tresp} and \textit{H.-P. Kriegel} [``A three-way model for collective learning on multi-relational data'', in: Proceedings of the 28th international conference on machine learning, ICML-11, Bellevue, Washington, USA, June 28 -- July 2, 2011. Madison, WI: Omnipress. 809--816 (2011)] proposed a RESCAL tensor factorization model for statistical relational learning, which achieves better or at least comparable results on common benchmark data sets when compared to other state-of-the-art methods. Given a positive integer \(s\), RESCAL computes an \(s\)-dimensional latent vector for each entity. The latent factors can be further used for solving relational learning tasks, such as collective classification, collective entity resolution and link-based clustering. \par The focus of this paper is to determine the number of latent factors in the RESCAL model. Due to the structure of the RESCAL model, its log-likelihood function is not concave. As a result, the corresponding maximum likelihood estimators (MLEs) may not be consistent. Nonetheless, we design a specific pseudometric, prove the consistency of the MLEs
under this
pseudometric and establish its rate of convergence. Based on these results, we propose a general class of information criteria and prove their model selection consistencies when the number of relations is either bounded or diverges at a proper rate of the number of entities. Simulations and real data examples show that our proposed information criteria have good finite sample properties.Binarization of the swallow swarm optimization for feature selectionhttps://www.zbmath.org/1483.683212022-05-16T20:40:13.078697Z"Slezkin, A. O."https://www.zbmath.org/authors/?q=ai:slezkin.a-o"Hodashinsky, I. A."https://www.zbmath.org/authors/?q=ai:hodashinsky.i-a"Shelupanov, A. A."https://www.zbmath.org/authors/?q=ai:shelupanov.alexander-aSummary: In this paper, we propose six methods for binarization of the swallow swarm optimization (SSO) algorithm to solve the feature selection problem. The relevance of the selected feature subsets is estimated by two classifiers: a fuzzy rule-based classifier and a classifier based on \(k\)-nearest neighbors. To find an optimal subset of features, we take into account the number of features and classification accuracy. The developed algorithms are tested on datasets from the KEEL repository. For the statistical evaluation of the binarization methods, we use Friedman's two-way analysis of variance by ranks for related samples. The best feature selection result is shown by a hybrid method based on modified algebraic operations and MERGE operation introduced by the authors of this paper. 
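One common V-shaped transfer function from the binary metaheuristics literature can be sketched as follows; the paper compares several binarization methods, and this particular form, \(|\tanh(x)|\), together with the helper names, is an illustrative assumption rather than the authors' exact choice:

```python
import numpy as np

def v_shaped_transfer(x):
    """V-shaped transfer function |tanh(x)| mapping a continuous
    position component to a bit-flip probability in [0, 1)."""
    return np.abs(np.tanh(x))

def binarize(position, bits, rng):
    """Flip each bit of the current feature mask with probability given
    by the transfer function (sketch of the binarization step)."""
    flip = rng.random(len(bits)) < v_shaped_transfer(position)
    return np.where(flip, 1 - np.asarray(bits), bits)
```

Large position magnitudes thus make a feature's inclusion bit likely to flip, while positions near zero leave the mask unchanged.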
The best classification accuracy is achieved with a V-shaped transfer function.A novel boosting algorithm for multi-task learning based on the Itakura-Saito divergencehttps://www.zbmath.org/1483.683222022-05-16T20:40:13.078697Z"Takenouchi, Takashi"https://www.zbmath.org/authors/?q=ai:takenouchi.takashi"Komori, Osamu"https://www.zbmath.org/authors/?q=ai:komori.osamu"Eguchi, Shinto"https://www.zbmath.org/authors/?q=ai:eguchi.shintoSummary: In this paper, we propose a novel multi-task learning algorithm based on an ensemble learning method. We consider a specific setting of the multi-task learning for binary classification problems, in which features are shared among all tasks and all tasks are targets of performance improvement. We focus on a situation in which the shared structures among datasets are represented by a divergence between the underlying distributions associated with multiple tasks. We discuss properties of the proposed method and investigate the validity of the proposed method with numerical experiments.
For the entire collection see [Zbl 1470.00021].Coupling loss and self-used privileged information guided multi-view transfer learninghttps://www.zbmath.org/1483.683232022-05-16T20:40:13.078697Z"Tang, Jingjing"https://www.zbmath.org/authors/?q=ai:tang.jingjing"He, Yiwei"https://www.zbmath.org/authors/?q=ai:he.yiwei"Tian, Yingjie"https://www.zbmath.org/authors/?q=ai:tian.yingjie"Liu, Dalian"https://www.zbmath.org/authors/?q=ai:liu.dalian"Kou, Gang"https://www.zbmath.org/authors/?q=ai:kou.gang"Alsaadi, Fawaz E."https://www.zbmath.org/authors/?q=ai:alsaadi.fawaz-eSummary: Transfer learning builds models for the target domain by leveraging the information from another related source domain, in which the distributions of two domains are usually quite distinct. Real-world data are often characterized by multiple representations known as multi-view features. In the multi-view transfer learning field, existing methods aim to address the following two issues. Firstly, due to the distributional difference between the two domains, the classifier trained on the source domain may underperform on the target domain. Moreover, the lack of data from the target domain generally occurs in the training phase. Secondly, how to fully exploit the relations among multiple features is challenging when such multi-view representations emerge in the source and target domains. In this paper, we propose a new coupling loss and self-used privileged information guided multi-view transfer learning method (MVTL-CP). The first issue is addressed by utilizing the weighted labeled data from the source domain to learn a precise classifier for the target domain. Following the consensus and complementarity principles, we tackle the second issue by making the best use of multiple views. Furthermore, we analyze the consistency between views and the generalization capability of MVTL-CP. 
Comprehensive experiments confirm the effectiveness of our proposed model.Online aggregation of unbounded losses using shifting experts with confidencehttps://www.zbmath.org/1483.683242022-05-16T20:40:13.078697Z"V'yugin, Vladimir"https://www.zbmath.org/authors/?q=ai:vyugin.vladimir-v"Trunov, Vladimir"https://www.zbmath.org/authors/?q=ai:trunov.vladimirSummary: We develop the setting of sequential prediction based on shifting experts and on a ``smooth'' version of the method of specialized experts. To aggregate expert predictions, we use the AdaHedge algorithm, which is a version of the Hedge algorithm with adaptive learning rate, and extend it by the meta-algorithm Fixed Share. Due to this, we combine the advantages of both algorithms: (1) we use the shifting regret, which is a more suitable performance measure in this setting; (2) regret bounds are valid in the case of signed unbounded losses of the experts. Also, (3) we incorporate in this scheme a ``smooth'' version of the method of specialized experts, which allows us to make more flexible and accurate predictions. All results are obtained in the adversarial setting -- no assumptions are made about the nature of the data source. We present results of numerical experiments for short-term forecasting of electricity consumption based on real data.Hybrid MPI/OpenMP parallel asynchronous distributed alternating direction method of multipliershttps://www.zbmath.org/1483.683252022-05-16T20:40:13.078697Z"Wang, Dongxia"https://www.zbmath.org/authors/?q=ai:wang.dongxia"Lei, Yongmei"https://www.zbmath.org/authors/?q=ai:lei.yongmei"Zhou, Jianhui"https://www.zbmath.org/authors/?q=ai:zhou.jianhuiSummary: The distributed alternating direction method of multipliers (ADMM) is one of the most widely used algorithms to solve large-scale optimization problems.
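For context, the classical sequential ADMM iteration for the lasso, one of the applications treated in this summary, can be sketched as follows. This is a generic textbook sketch in scaled form, not the parallel HPAD-ADMM implementation; the data, penalty `lam`, and step `rho` are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    # elementwise soft-thresholding, the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam=0.1, rho=10.0, n_iter=500):
    """Sequential scaled-form ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1.
    The cached (A^T A + rho I) inverse mirrors the matrix-inversion step
    that the distributed variants parallelize per worker."""
    m, n = A.shape
    Atb = A.T @ b
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached once
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        x = inv @ (Atb + rho * (z - u))        # quadratic sub-problem
        z = soft_threshold(x + u, lam / rho)   # l1 proximal step
        u = u + x - z                          # scaled dual update
    return z

# toy check: a sparse ground truth is approximately recovered
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = lasso_admm(A, b)
```

The three updates (x, z, u) are the blocks that distributed consensus variants assign to workers; here they run in a single loop.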
Since the memory consumption, communication cost and convergence of the distributed ADMM are affected by the number of workers, improving the scalability of the distributed ADMM is one of the main challenges. To address this challenge, this paper proposes an asynchronous distributed ADMM based on the hybrid parallel model (HPAD-ADMM), which uses OpenMP for parallelization inside the node and MPI for message passing between nodes in the distributed system. Each worker solves its sub-problem in parallel by multithreading, which reduces the system time at each iteration without affecting the convergence of the system or increasing the communication cost and memory consumption. Furthermore, this paper designs efficient parallelized algorithms to solve sub-problems for different applications. For the L1-regularized logistic regression problem, the sub-problem is solved by a parallel trust-region Newton method, and the system time is reduced by adjusting the accuracy of the sub-problem. For the lasso problem, parallel matrix inversion algorithms are selected dynamically to reduce the system time according to the size of the data set. Finally, large-scale data sets are used to test the performance of the HPAD-ADMM. Experimental results show that compared with the state-of-the-art distributed ADMM, the HPAD-ADMM has higher scalability without losing accuracy.Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learninghttps://www.zbmath.org/1483.683262022-05-16T20:40:13.078697Z"Ye, Han-Jia"https://www.zbmath.org/authors/?q=ai:ye.hanjia"Zhan, De-Chuan"https://www.zbmath.org/authors/?q=ai:zhan.dechuan"Jiang, Yuan"https://www.zbmath.org/authors/?q=ai:jiang.yuanSummary: Distance metric learning (DML) aims to find a suitable measure to compute a distance between instances.
Facilitated by side information, the learned metric can often improve the performance of similarity- or distance-based methods such as \(k\)NN. Theoretical analyses of DML focus on the learning effectiveness for the squared Mahalanobis distance. Specifically, they ask whether the Mahalanobis metric learned from empirically sampled pairwise constraints is in accordance with the optimal metric over paired samples generated from the true distribution, and what the sample complexity of this process is. The excess risk measures the quality of the generalization, i.e., the gap between the expected objective of the empirical metric learned from a regularized objective with convex loss function and the one with the optimal metric. Given \(N\) training examples, existing analyses over this non-i.i.d. learning problem have proved that the excess risk of DML converges to zero at a rate of \({\mathcal {O}}\left( \frac{1}{\sqrt{N}}\right) \). In this paper, we obtain a faster convergence rate of DML, \({\mathcal {O}}\left( \frac{1}{N}\right) \), when learning the distance metric with a smooth loss function and a strongly convex objective. In addition, when the problem is relatively easy and the number of training samples is large enough, this rate can be further improved to \({\mathcal {O}}\left( \frac{1}{N^2}\right) \).
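In display form, the three regimes can be summarized as follows (writing \(L(\cdot)\) for the expected objective, \(\hat M\) for the empirically learned metric and \(M^\ast\) for the optimal one; these symbols are introduced here for the summary and are not the authors' notation):

```latex
\[
L(\hat M) - L(M^\ast) =
\begin{cases}
\mathcal{O}\bigl(\tfrac{1}{\sqrt{N}}\bigr), & \text{general convex loss (existing analyses)},\\[2pt]
\mathcal{O}\bigl(\tfrac{1}{N}\bigr), & \text{smooth loss, strongly convex objective},\\[2pt]
\mathcal{O}\bigl(\tfrac{1}{N^{2}}\bigr), & \text{relatively easy problem, } N \text{ large enough.}
\end{cases}
\]
```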
Synthetic experiments validate that DML can achieve the specified faster generalization rate, and results under various settings shed further light on the theoretical properties of DML.Function mining based on gene expression programming -- convergency analysis and remnant-guided evolution algorithmhttps://www.zbmath.org/1483.683272022-05-16T20:40:13.078697Z"Yuan, Chang An"https://www.zbmath.org/authors/?q=ai:yuan.changan"Tang, Chang Jie"https://www.zbmath.org/authors/?q=ai:tang.changjie"Zuo, Jie"https://www.zbmath.org/authors/?q=ai:zuo.jie"Xie, Fang Jun"https://www.zbmath.org/authors/?q=ai:xie.fangjun"Chen, An Long"https://www.zbmath.org/authors/?q=ai:chen.anlong"Hu, Jian Jun"https://www.zbmath.org/authors/?q=ai:hu.jianjun(no abstract)Tensorized multi-view subspace representation learninghttps://www.zbmath.org/1483.683282022-05-16T20:40:13.078697Z"Zhang, Changqing"https://www.zbmath.org/authors/?q=ai:zhang.changqing"Fu, Huazhu"https://www.zbmath.org/authors/?q=ai:fu.huazhu"Wang, Jing"https://www.zbmath.org/authors/?q=ai:wang.jing.3|wang.jing|wang.jing.16|wang.jing.5|wang.jing.15|wang.jing.14|wang.jing.13|wang.jing.6|wang.jing.17|wang.jing.18|wang.jing.2"Li, Wen"https://www.zbmath.org/authors/?q=ai:li.wen.2|li.wen.1|li.wen"Cao, Xiaochun"https://www.zbmath.org/authors/?q=ai:cao.xiaochun"Hu, Qinghua"https://www.zbmath.org/authors/?q=ai:hu.qinghuaSummary: Self-representation based subspace learning has shown its effectiveness in many applications. In this paper, we promote traditional subspace representation learning by simultaneously taking advantage of multiple views and a prior constraint. Accordingly, we establish a novel algorithm termed Tensorized Multi-view Subspace Representation Learning. To exploit different views, the subspace representation matrices of different views are regarded as a low-rank tensor, which effectively models the high-order correlations of multi-view data.
To incorporate prior information, a constraint matrix is devised to guide the subspace representation learning within a unified framework. The subspace representation tensor equipped with a low-rank constraint elegantly models the complementary information among different views, reduces redundancy of subspace representations, and then improves the accuracy of subsequent tasks. We formulate the model as a tensor nuclear norm minimization problem constrained by an \(\ell_{2,1}\)-norm and linear equalities. The minimization problem is efficiently solved by using an Augmented Lagrangian Alternating Direction Minimization method. Extensive experimental results on diverse multi-view datasets demonstrate the effectiveness of our algorithm.Multi-label feature selection based on the division of label topicshttps://www.zbmath.org/1483.683292022-05-16T20:40:13.078697Z"Zhang, Ping"https://www.zbmath.org/authors/?q=ai:zhang.ping.2|zhang.ping.1|zhang.ping.3|zhang.ping|zhang.ping.6|zhang.ping.5"Gao, Wanfu"https://www.zbmath.org/authors/?q=ai:gao.wanfu"Hu, Juncheng"https://www.zbmath.org/authors/?q=ai:hu.juncheng"Li, Yonghao"https://www.zbmath.org/authors/?q=ai:li.yonghaoSummary: Multi-label feature selection has attracted much attention from researchers and can reduce the high dimensionality of multi-label data. Previous multi-label methods consider the importance of all labels equal; as a result, they choose the discriminative features based on the entire label set. In fact, there exists a latent semantic structure in the label set. Specifically, labels can be sorted into some central topics and some subordinate topics. Features related to central topics should be chosen first and in greater number. To this end, we first explore the latent semantic structure via spectral clustering. The labels are abstracted into several clusters named central clusters and subordinate clusters.
Second, the importance of features with respect to the labels in each cluster is scored. Finally, we obtain the feature subset based on both the scores of features and the type of clusters. Comprehensive experiments demonstrate the superiority of the proposed method against seven state-of-the-art multi-label feature selection methods on fourteen benchmark multi-label data sets.Adversarially learned one-class novelty detection with confidence estimationhttps://www.zbmath.org/1483.683302022-05-16T20:40:13.078697Z"Zhang, Ying"https://www.zbmath.org/authors/?q=ai:zhang.ying.4|zhang.ying|zhang.ying.1|zhang.ying.5|zhang.ying.2|zhang.ying.3"Zhou, Baohang"https://www.zbmath.org/authors/?q=ai:zhou.baohang"Ding, Xiaoke"https://www.zbmath.org/authors/?q=ai:ding.xiaoke"Ouyang, Jiawei"https://www.zbmath.org/authors/?q=ai:ouyang.jiawei"Cai, Xiangrui"https://www.zbmath.org/authors/?q=ai:cai.xiangrui"Gao, Jinyang"https://www.zbmath.org/authors/?q=ai:gao.jinyang"Yuan, Xiaojie"https://www.zbmath.org/authors/?q=ai:yuan.xiaojieSummary: Given samples from a particular normal class, image novelty detection is aimed at determining whether a query sample is from the normal class. Images from this particular class are termed inliers or normal images, whereas images not belonging to the class are termed novelties. Most novelty detection approaches use deep neural networks. However, few of them are end-to-end, and state-of-the-art neural networks tend to be overconfident in their predictions. This may result in incorrect predictions and may negatively affect novelty detection. In this paper, we propose a novel model termed adversarially learned one-class novelty detection with confidence estimation for image novelty detection. The proposed model consists of a representation and a detection module, which are adversarially trained to collaboratively learn the inlier distribution. Moreover, the model uses confidence estimation so that the detection module can be more effective.
The proposed model is end-to-end and does not require additional calculations such as novelty scores after training. We conduct comprehensive experiments on four publicly available datasets that are commonly used for novelty detection, and the model is compared with state-of-the-art methods to demonstrate its performance.MBSVR: multiple birth support vector regressionhttps://www.zbmath.org/1483.683312022-05-16T20:40:13.078697Z"Zhang, Zichen"https://www.zbmath.org/authors/?q=ai:zhang.zichen"Ding, Shifei"https://www.zbmath.org/authors/?q=ai:ding.shifei"Sun, Yuting"https://www.zbmath.org/authors/?q=ai:sun.yutingSummary: Although the learning speed in twin support vector regression (TWSVR) is four times that in support vector regression (SVR), the computing time and fitting precision of TWSVR are limited. This paper develops multiple birth support vector regression (MBSVR), motivated by the multiple birth support vector machine (MBSVM) formulation. MBSVR constructs the final regressor from K hyperplanes, each of which is obtained by solving a small quadratic programming problem (QPP) with the associated constraints, in which all points in the corresponding class should be as far away as possible from its corresponding hyperplane. Since MBSVM can be seen as an extension of the twin support vector machine (TWSVM) and its computing time is less than that of TWSVM, the proposed MBSVR is also faster than TWSVR, especially when the number of classes \(K\) is large. To verify the performance of the proposed MBSVR, it is compared with TWSVR, TSVR (another form of twin support vector regression) and SVR on several synthetic datasets and UCI datasets.GADE: a generative adversarial approach to density estimation and its applicationshttps://www.zbmath.org/1483.683322022-05-16T20:40:13.078697Z"Abbasnejad, M. Ehsan"https://www.zbmath.org/authors/?q=ai:abbasnejad.m-ehsan"Shi, Javen"https://www.zbmath.org/authors/?q=ai:shi.javen"van den Hengel, Anton"https://www.zbmath.org/authors/?q=ai:van-den-hengel.anton"Liu, Lingqiao"https://www.zbmath.org/authors/?q=ai:liu.lingqiaoSummary: Density estimation is a challenging unsupervised learning problem. Current maximum likelihood approaches for density estimation are either restrictive or incapable of producing high-quality samples. On the other hand, likelihood-free models, such as generative adversarial networks, produce sharp samples without a density model. The lack of a density estimate limits the applications to which the sampled data can be put, however. We propose a \textit{generative adversarial density estimator} (GADE), a density estimation approach that bridges the gap between the two. Allowing for a prior on the parameters of the model, we extend our density estimator to a Bayesian model where we can leverage the predictive variance to measure our confidence in the likelihood. Our experiments on challenging applications, such as visual dialog and autonomous driving, where the density and the confidence in predictions are crucial, show the effectiveness of our approach.MimicGAN: robust projection onto image manifolds with corruption mimickinghttps://www.zbmath.org/1483.683332022-05-16T20:40:13.078697Z"Anirudh, Rushil"https://www.zbmath.org/authors/?q=ai:anirudh.rushil"Thiagarajan, Jayaraman J."https://www.zbmath.org/authors/?q=ai:thiagarajan.jayaraman-j"Kailkhura, Bhavya"https://www.zbmath.org/authors/?q=ai:kailkhura.bhavya"Bremer, Peer-Timo"https://www.zbmath.org/authors/?q=ai:bremer.peer-timoSummary: In the past few years, Generative Adversarial Networks (GANs) have dramatically advanced our ability to represent and parameterize high-dimensional, non-linear image manifolds.
As a result, they have been widely adopted across a variety of applications, ranging from challenging inverse problems like image completion to problems such as anomaly detection and adversarial defense. A recurring theme in many of these applications is the notion of projecting an image observation onto the manifold that is inferred by the generator. In this context, Projected Gradient Descent (PGD) has been the most popular approach, which essentially optimizes for a latent vector that minimizes the discrepancy between a generated image and the given observation. However, PGD is a brittle optimization technique that fails to identify the right projection (or latent vector) when the observation is corrupted or perturbed, even by a small amount. Such corruptions are common in the real world; for example, images in the wild come with unknown crops, rotations, missing pixels, or other kinds of non-linear distributional shifts that break current encoding methods, rendering downstream applications unusable. To address this, we propose corruption mimicking, a new robust projection technique that utilizes a surrogate network to approximate the unknown corruption directly at test time, without the need for additional supervision or data augmentation. The proposed method is significantly more robust than PGD and other competing methods under a wide variety of corruptions, thereby enabling a more effective use of GANs in real-world applications.
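The alternating "corruption mimicking" idea can be illustrated numerically on a toy problem: a fixed linear map stands in for the frozen generator and a constant additive offset for the unknown corruption. Both stand-ins, and all parameters below, are our hypothetical choices, not the authors' networks.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((20, 4))   # hypothetical frozen "generator": G(z) = W @ z
def G(z):
    return W @ z

z_true = rng.standard_normal(4)
b_true = 0.5                       # unknown corruption: a constant additive offset
y = G(z_true) + b_true             # corrupted observation

# Alternating estimation at test time: fit a surrogate for the corruption
# (here a single scalar b), then take a gradient step on the latent vector z.
z = np.zeros(4)
b = 0.0
for _ in range(500):
    Gz = G(z)
    b = np.mean(y - Gz)                 # surrogate step: best offset for current z
    grad_z = 2.0 * W.T @ (Gz + b - y)   # latent step on ||G(z) + b - y||^2
    z -= 0.01 * grad_z

recon_err = np.linalg.norm(G(z) + b - y)
```

In this noise-free linear toy the pair (z, b) is identifiable and the alternating scheme recovers both; the real method replaces the scalar surrogate with a small corruption network and the linear map with a trained GAN generator.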
More importantly, we show that our approach produces state-of-the-art performance in several GAN-based applications (anomaly detection, domain adaptation, and adversarial defense) that benefit from an accurate projection.Discriminator feature-based inference by recycling the discriminator of GANshttps://www.zbmath.org/1483.683342022-05-16T20:40:13.078697Z"Bang, Duhyeon"https://www.zbmath.org/authors/?q=ai:bang.duhyeon"Kang, Seoungyoon"https://www.zbmath.org/authors/?q=ai:kang.seoungyoon"Shim, Hyunjung"https://www.zbmath.org/authors/?q=ai:shim.hyunjungSummary: Generative adversarial networks (GANs) successfully generate high-quality data by learning a mapping from a latent vector to the data. Various studies assert that the latent space of a GAN is semantically meaningful and can be utilized for advanced data analysis and manipulation. To analyze the real data in the latent space of a GAN, it is necessary to build an inference mapping from the data to the latent vector. This paper proposes an effective algorithm to accurately infer the latent vector by utilizing GAN discriminator features. Our primary goal is to increase inference mapping accuracy with minimal training overhead. Furthermore, using the proposed algorithm, we suggest a conditional image generation algorithm, namely a spatially conditioned GAN.
Extensive evaluations confirm that the proposed inference algorithm achieves a more semantically accurate inference mapping than existing methods and can be successfully applied to advanced conditional image generation tasks.Group invariance, stability to deformations, and complexity of deep convolutional representationshttps://www.zbmath.org/1483.683352022-05-16T20:40:13.078697Z"Bietti, Alberto"https://www.zbmath.org/authors/?q=ai:bietti.alberto"Mairal, Julien"https://www.zbmath.org/authors/?q=ai:mairal.julienSummary: The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the
RKHS
norm, which controls both stability and generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.RoCGAN: robust conditional GANhttps://www.zbmath.org/1483.683362022-05-16T20:40:13.078697Z"Chrysos, Grigorios G."https://www.zbmath.org/authors/?q=ai:chrysos.grigorios-g"Kossaifi, Jean"https://www.zbmath.org/authors/?q=ai:kossaifi.jean"Zafeiriou, Stefanos"https://www.zbmath.org/authors/?q=ai:zafeiriou.stefanos-pSummary: Conditional image generation lies at the heart of computer vision, and conditional generative adversarial networks (cGAN) have recently become the method of choice for this task, owing to their superior performance. The focus so far has largely been on performance improvement, with little effort in making cGANs more robust to noise. However, the regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGANs unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called \textit{RoCGAN}, which leverages structure in the target space of the model to address the issue. Specifically, we augment the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold, even in the presence of intense noise. We prove that RoCGAN shares similar theoretical properties with GAN and establish with both synthetic and real data the merits of our model. We perform a thorough experimental validation on large-scale datasets for natural scenes and faces and observe that our model outperforms existing cGAN architectures by a large margin.
We also empirically demonstrate the performance of our approach in the face of two types of noise (adversarial and Bernoulli).Pruning deep convolutional neural networks architectures with evolution strategyhttps://www.zbmath.org/1483.683372022-05-16T20:40:13.078697Z"Fernandes, Francisco E. jun."https://www.zbmath.org/authors/?q=ai:fernandes.francisco-e-jun"Yen, Gary G."https://www.zbmath.org/authors/?q=ai:yen.gary-gSummary: Currently, Deep Convolutional Neural Networks (DCNNs) are used to solve all kinds of problems in the field of machine learning and artificial intelligence due to their learning and adaptation capabilities. However, most successful DCNN models have a high computational complexity making them difficult to deploy on mobile or embedded platforms. This problem has prompted many researchers to develop algorithms and approaches to help reduce the computational complexity of such models. One of them is called filter pruning, where convolution filters are eliminated to reduce the number of parameters and, consequently, the computational complexity of the given model. In the present work, we propose a novel algorithm to perform filter pruning by using a Multi-Objective Evolution Strategy (MOES) algorithm, called DeepPruningES. Our approach avoids the need for using any knowledge during the pruning procedure and helps decision-makers by returning three pruned CNN models with different trade-offs between performance and computational complexity. 
We show that DeepPruningES can significantly reduce a model's computational complexity by testing it on three DCNN architectures: Convolutional Neural Networks (CNNs), Residual Neural Networks (ResNets), and Densely Connected Neural Networks (DenseNets).Theoretical investigation of generalization bounds for adversarial learning of deep neural networkshttps://www.zbmath.org/1483.683382022-05-16T20:40:13.078697Z"Gao, Qingyi"https://www.zbmath.org/authors/?q=ai:gao.qingyi"Wang, Xiao"https://www.zbmath.org/authors/?q=ai:wang.xiao|wang.xiao.2|wang.xiao.1The authors investigate the generalization behavior of adversarial learning through Rademacher complexity. The paper establishes three results:
\begin{itemize}
\item[1.] A tighter upper bound on the Rademacher complexity for the class of functions representable as DNN with spectral normalization
[\textit{T. Miyato} et al., ``Spectral normalization for generative adversarial networks'', Preprint, \url{arXiv:1802.05957}]
and low-rank weight matrices under the Fast Gradient Sign Method (FGSM), which is a commonly used adversarial training method. This means that, theoretically, adversarial learning through FGSM is easier than previously suggested in the literature.
\item[2.] The authors also prove that adversarial training is never easier than natural training by showing that the Rademacher complexity for adversarial learning is greater than its natural learning counterpart.
\item[3.] The authors conduct experiments on a synthetic dataset to verify the theoretical findings. In particular, they demonstrate that the Rademacher complexity of adversarial learning is independent of the depth of the network if the network has a low-rank weight matrix.
\end{itemize}
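FGSM, the adversarial training method named in item 1, perturbs each input by a small step in the sign direction of the input gradient of the loss. A minimal sketch on a toy logistic-regression model follows; the data, step sizes, and \(\varepsilon\) are hypothetical and not taken from the paper under review.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fgsm_perturb(x, y, w, eps):
    # FGSM: shift each input by eps in the sign direction of the
    # input gradient of the logistic loss (labels y in {0, 1})
    p = sigmoid(x @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

# toy adversarial training loop on two separable Gaussian blobs
rng = np.random.default_rng(0)
n = 200
y = (rng.random(n) < 0.5).astype(float)
x = rng.standard_normal((n, 2)) + (2.0 * y[:, None] - 1.0) * 2.0

w = np.zeros(2)
for _ in range(200):
    x_adv = fgsm_perturb(x, y, w, eps=0.1)   # attack the current model ...
    p = sigmoid(x_adv @ w)
    w -= 0.5 * (x_adv.T @ (p - y)) / n       # ... and train on the perturbed batch

clean_acc = np.mean((sigmoid(x @ w) > 0.5) == (y > 0.5))
```

The single-step, sign-only structure of the perturbation is what makes the Rademacher-complexity analysis of FGSM training tractable relative to multi-step attacks.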
Reviewer: Hongshan Li (Mountain View)Tracking the race between deep reinforcement learning and imitation learninghttps://www.zbmath.org/1483.683392022-05-16T20:40:13.078697Z"Gros, Timo P."https://www.zbmath.org/authors/?q=ai:gros.timo-p"Höller, Daniel"https://www.zbmath.org/authors/?q=ai:holler.daniel"Hoffmann, Jörg"https://www.zbmath.org/authors/?q=ai:hoffmann.jorg|hoffmann.jorg.1"Wolf, Verena"https://www.zbmath.org/authors/?q=ai:wolf.verenaSummary: Learning-based approaches for solving large sequential decision making problems have become popular in recent years. The resulting agents perform differently and their characteristics depend on those of the underlying learning approach. Here, we consider a benchmark planning problem from the reinforcement learning domain, the Racetrack, to investigate the properties of agents derived from different deep (reinforcement) learning approaches. We compare the performance of deep supervised learning, in particular imitation learning, to reinforcement learning for the Racetrack model. We find that imitation learning yields agents that follow more risky paths. In contrast, the decisions of deep reinforcement learning are more foresighted, i.e., avoid states in which fatal decisions are more likely. Our evaluations show that for this sequential decision making problem, deep reinforcement learning performs best in many aspects even though for imitation learning optimal decisions are considered.
For the entire collection see [Zbl 1475.68022].Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural networkhttps://www.zbmath.org/1483.683402022-05-16T20:40:13.078697Z"Kang, Qian"https://www.zbmath.org/authors/?q=ai:kang.qian"Fan, Qinwei"https://www.zbmath.org/authors/?q=ai:fan.qinwei"Zurada, Jacek M."https://www.zbmath.org/authors/?q=ai:zurada.jacek-mSummary: In this paper, we propose a sparse and accelerated method for Sigma-Pi-Sigma neural network training based on smoothing group lasso regularization and adaptive momentum. It is shown that group sparsity can more efficiently sparsify the network structure at the group level, and the adaptive momentum term can speed up the learning convergence during the iteration process. Another important contribution lies in the theoretical analysis. However, the group lasso regularization is not differentiable at the origin. This leads to oscillations observed in numerical experiments and poses a challenge to theoretical analysis. We overcome these problems by smoothing techniques. Under suitable assumptions, we rigorously prove monotonicity as well as weak and strong convergence theorems for the new algorithm.
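One common way to smooth the group-lasso penalty at the origin, in the spirit of the smoothing just described, is to replace each group norm \(\Vert w_g\Vert_2\) by \(\sqrt{\Vert w_g\Vert_2^2+\mu^2}\), which is differentiable everywhere. This is a generic sketch; the paper's exact smoothing function may differ.

```python
import numpy as np

def smoothed_group_lasso(w, groups, mu=1e-3):
    """Smoothed group-lasso penalty sum_g sqrt(||w_g||^2 + mu^2)
    and its gradient; well-defined even when a group is exactly zero."""
    val = 0.0
    grad = np.zeros_like(w)
    for g in groups:
        wg = w[g]
        s = np.sqrt(wg @ wg + mu * mu)   # smooth surrogate for ||w_g||_2
        val += s
        grad[g] = wg / s                 # gradient of the smoothed group term
    return val, grad

w = np.array([0.0, 0.0, 3.0, 4.0])
groups = [np.array([0, 1]), np.array([2, 3])]
val, grad = smoothed_group_lasso(w, groups, mu=1e-3)
# val ≈ mu + 5.0; the gradient of the all-zero group is exactly 0
```

Because the gradient is defined at the origin, a gradient-based training loop with momentum can use this penalty directly, which is what enables the deterministic convergence analysis.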
Finally, the numerical experiments are presented to support our theoretical findings.Transferrable feature and projection learning with class hierarchy for zero-shot learninghttps://www.zbmath.org/1483.683412022-05-16T20:40:13.078697Z"Li, Aoxue"https://www.zbmath.org/authors/?q=ai:li.aoxue"Lu, Zhiwu"https://www.zbmath.org/authors/?q=ai:lu.zhiwu"Guan, Jiechao"https://www.zbmath.org/authors/?q=ai:guan.jiechao"Xiang, Tao"https://www.zbmath.org/authors/?q=ai:xiang.tao"Wang, Liwei"https://www.zbmath.org/authors/?q=ai:wang.liwei.1"Wen, Ji-Rong"https://www.zbmath.org/authors/?q=ai:wen.jirongSummary: Zero-shot learning (ZSL) aims to transfer knowledge from seen classes to unseen ones so that the latter can be recognised without any training samples. This is made possible by learning a projection function between a feature space and a semantic space (e.g. attribute space). Considering the seen and unseen classes as two domains, a big domain gap often exists which challenges ZSL. In this work, we propose a novel inductive ZSL model that leverages superclasses as the bridge between seen and unseen classes to narrow the domain gap. Specifically, we first build a class hierarchy of multiple superclass layers and a single class layer, where the superclasses are automatically generated by data-driven clustering over the semantic representations of all seen and unseen class names. We then exploit the superclasses from the class hierarchy to tackle the domain gap challenge in two aspects: deep feature learning and projection function learning. First, to narrow the domain gap in the feature space, we define a recurrent neural network over superclasses and then plug it into a convolutional neural network for enforcing the superclass hierarchy. Second, to further learn a transferrable projection function for ZSL, a novel projection function learning method is proposed by exploiting the superclasses to align the two domains. 
Importantly, our transferrable feature and projection learning methods can be easily extended to a closely related task -- few-shot learning (FSL). Extensive experiments show that the proposed model outperforms the state-of-the-art alternatives in both ZSL and FSL tasks.Multimodal image synthesis with conditional implicit maximum likelihood estimationhttps://www.zbmath.org/1483.683422022-05-16T20:40:13.078697Z"Li, Ke"https://www.zbmath.org/authors/?q=ai:li.ke"Peng, Shichong"https://www.zbmath.org/authors/?q=ai:peng.shichong"Zhang, Tianhao"https://www.zbmath.org/authors/?q=ai:zhang.tianhao"Malik, Jitendra"https://www.zbmath.org/authors/?q=ai:malik.jitendra-kumarSummary: Many tasks in computer vision and graphics fall within the framework of conditional image synthesis. In recent years, generative adversarial nets have delivered impressive advances in quality of synthesized images. However, it remains a challenge to generate both diverse and plausible images for the \textit{same} input, due to the problem of mode collapse. In this paper, we develop a new generic multimodal conditional image synthesis method based on implicit maximum likelihood estimation and demonstrate improved multimodal image synthesis performance on two tasks, single image super-resolution and image synthesis from scene layouts. We make our implementation publicly available.A general framework for deep supervised discrete hashinghttps://www.zbmath.org/1483.683432022-05-16T20:40:13.078697Z"Li, Qi"https://www.zbmath.org/authors/?q=ai:li.qi|li.qi.1"Sun, Zhenan"https://www.zbmath.org/authors/?q=ai:sun.zhenan"He, Ran"https://www.zbmath.org/authors/?q=ai:he.ran"Tan, Tieniu"https://www.zbmath.org/authors/?q=ai:tan.tieniuSummary: With the rapid growth of image and video data on the web, hashing has been extensively studied for image or video search in recent years. Benefiting from recent advances in deep learning, deep hashing methods have shown superior performance over the traditional hashing methods. 
However, there are some limitations of previous deep hashing methods (e.g., the semantic information is not fully exploited). In this paper, we develop a general deep supervised discrete hashing framework based on the assumption that the learned binary codes should be ideal for classification. Both the similarity information and the classification information are used to learn the hash codes within a one-stream framework. We constrain the outputs of the last layer to be binary codes directly, which is rarely investigated in deep hashing algorithms. Besides, both the pairwise similarity information and the triplet ranking information are exploited in this paper. In addition, two different loss functions are presented: \(l_2\) loss and hinge loss, which are carefully designed for the classification term under the one-stream framework. Because of the discrete nature of hash codes, an alternating minimization method is used to optimize the objective function. Experimental results have shown that our approach outperforms current state-of-the-art methods on benchmark datasets.Rectified binary convolutional networks with generative adversarial learninghttps://www.zbmath.org/1483.683442022-05-16T20:40:13.078697Z"Liu, Chunlei"https://www.zbmath.org/authors/?q=ai:liu.chunlei"Ding, Wenrui"https://www.zbmath.org/authors/?q=ai:ding.wenrui"Hu, Yuan"https://www.zbmath.org/authors/?q=ai:hu.yuan"Zhang, Baochang"https://www.zbmath.org/authors/?q=ai:zhang.baochang"Liu, Jianzhuang"https://www.zbmath.org/authors/?q=ai:liu.jianzhuang"Guo, Guodong"https://www.zbmath.org/authors/?q=ai:guo.guodong"Doermann, David"https://www.zbmath.org/authors/?q=ai:doermann.davidSummary: Binarized convolutional neural networks (BNNs) are widely used to improve the memory and computational efficiency of deep convolutional neural networks so that they can be deployed on embedded devices.
However, existing BNNs fail to explore the potential of their corresponding full-precision models, resulting in a significant performance gap. This paper introduces a Rectified Binary Convolutional Network (RBCN) that combines full-precision kernels and feature maps to rectify the binarization process in a generative adversarial network (GAN) framework. We further prune our RBCNs using the GAN framework to increase model efficiency and promote flexibility in practical applications. Extensive experiments validate the superior performance of the proposed RBCN over state-of-the-art BNNs on tasks such as object classification, object tracking, face recognition, and person re-identification.Adversarial confidence learning for medical image segmentation and synthesishttps://www.zbmath.org/1483.683452022-05-16T20:40:13.078697Z"Nie, Dong"https://www.zbmath.org/authors/?q=ai:nie.dong"Shen, Dinggang"https://www.zbmath.org/authors/?q=ai:shen.dinggangSummary: Generative adversarial networks (GANs) are widely used in medical image analysis tasks, such as medical image segmentation and synthesis. In these works, adversarial learning is directly applied to the original supervised segmentation (synthesis) networks. The usage of adversarial learning is effective in improving visual perception performance, since adversarial learning works as a realism regularization for supervised generators. However, the quantitative performance often cannot improve as much as the qualitative performance, and it can even become worse in some cases. In this paper, we explore how we can take better advantage of adversarial learning in supervised segmentation (synthesis) models and propose an adversarial confidence learning framework to better model these problems. We analyze the roles of the discriminator in classic GANs and compare them with those in supervised adversarial systems.
Based on this analysis, we propose adversarial confidence learning, i.e., besides the adversarial learning for emphasizing visual perception, we use the confidence information provided by the adversarial network to enhance the design of the supervised segmentation (synthesis) network. In particular, we propose using a fully convolutional adversarial network for confidence learning to provide voxel-wise and region-wise confidence information for the segmentation (synthesis) network. With these settings, we propose a difficulty-aware attention mechanism to properly handle hard samples or regions by taking structural information into consideration, so that we can better deal with the irregular distribution of medical data. Furthermore, we investigate the loss functions of various GANs and propose using the binary cross entropy loss to train the proposed adversarial system so that we can retain the unlimited modeling capacity of the discriminator. Experimental results on clinical and challenge datasets show that our proposed network can achieve state-of-the-art segmentation (synthesis) accuracy. Further analysis also indicates that adversarial confidence learning can improve both the visual perception performance and the quantitative performance.SSN: learning sparse switchable normalization via SparsestMaxhttps://www.zbmath.org/1483.683462022-05-16T20:40:13.078697Z"Shao, Wenqi"https://www.zbmath.org/authors/?q=ai:shao.wenqi"Li, Jingyu"https://www.zbmath.org/authors/?q=ai:li.jingyu"Ren, Jiamin"https://www.zbmath.org/authors/?q=ai:ren.jiamin"Zhang, Ruimao"https://www.zbmath.org/authors/?q=ai:zhang.ruimao"Wang, Xiaogang"https://www.zbmath.org/authors/?q=ai:wang.xiaogang"Luo, Ping"https://www.zbmath.org/authors/?q=ai:luo.pingSummary: Normalization methods deal with the training of the parameters of convolutional neural networks (CNNs), which often contain multiple convolution layers.
Although the layers of a CNN are not homogeneous in the roles they play in representing the prediction function, existing works often employ an identical normalizer in different layers, leaving performance short of its potential. To tackle this problem and further boost performance, the recently proposed switchable normalization (SN) provides a new perspective for deep learning: it learns to select different normalizers for different convolution layers of a ConvNet. However, SN uses the softmax function to learn the importance ratios that combine normalizers, not only leading to redundant computations compared to a single normalizer but also making the model less interpretable. This work addresses this issue by presenting sparse switchable normalization (SSN), where the importance ratios are constrained to be sparse. Unlike \(\ell_1\) and \(\ell_0\) regularizations, which make tuning layer-wise regularization coefficients difficult, we turn this sparsity-constrained optimization problem into a feed-forward computation by proposing SparsestMax, a sparse version of softmax. SSN has several appealing properties. (1) It inherits all benefits of SN, such as applicability in various tasks and robustness to a wide range of batch sizes. (2) It is guaranteed to select only one normalizer for each normalization layer, avoiding redundant computations and improving the interpretability of normalizer selection. (3) SSN can be transferred to various tasks in an end-to-end manner. Extensive experiments show that SSN outperforms its counterparts on various challenging benchmarks such as ImageNet, COCO, Cityscapes, ADE20K, Kinetics and MegaFace.
Models and code are available at \url{https://github.com/switchablenorms/Sparse_SwitchNorm}.Anchor-based self-ensembling for semi-supervised deep pairwise hashinghttps://www.zbmath.org/1483.683472022-05-16T20:40:13.078697Z"Shi, Xiaoshuang"https://www.zbmath.org/authors/?q=ai:shi.xiaoshuang"Guo, Zhenhua"https://www.zbmath.org/authors/?q=ai:guo.zhenhua"Xing, Fuyong"https://www.zbmath.org/authors/?q=ai:xing.fuyong"Liang, Yun"https://www.zbmath.org/authors/?q=ai:liang.yun"Yang, Lin"https://www.zbmath.org/authors/?q=ai:yang.linSummary: Deep hashing has attracted considerable attention for large-scale retrieval tasks, because of the automatic and powerful feature extraction of convolutional neural networks and the gains of hashing in computation and storage costs. Most current supervised deep hashing methods only utilize the semantic information of labeled data without exploiting unlabeled data. However, data annotation is expensive, and thus only scarce labeled data are available, which can hardly represent the true distribution of all the data. In this paper, we propose a novel semi-supervised deep pairwise hashing method that leverages both labeled and unlabeled data to learn hash functions. Our method utilizes the transduction of anchors to preserve the pairwise similarity relationship among both labeled and unlabeled samples. Additionally, to explore the semantic similarity information hidden in unlabeled data, it adopts self-ensembling to create strong ensemble targets for the latent binary vectors of training samples and to form a consensus prediction of the similarity relationship to multiple anchors. Unlike previous pairwise-based hashing methods, which do not maintain the relevance among similar neighbors, we further explain and exhibit the capability of our method to preserve their relevance by calculating their similarities to anchors.
Finally, extensive experiments on benchmark databases demonstrate the superior performance of the proposed method over recent state-of-the-art hashing methods on multiple retrieval tasks. The source code of the proposed method is available at: \url{https://github.com/xsshi2015/Semi-supervised-Deep-Pairwise-Hashing}.Unified binary generative adversarial network for image retrieval and compressionhttps://www.zbmath.org/1483.683482022-05-16T20:40:13.078697Z"Song, Jingkuan"https://www.zbmath.org/authors/?q=ai:song.jingkuan"He, Tao"https://www.zbmath.org/authors/?q=ai:he.tao"Gao, Lianli"https://www.zbmath.org/authors/?q=ai:gao.lianli"Xu, Xing"https://www.zbmath.org/authors/?q=ai:xu.xing"Hanjalic, Alan"https://www.zbmath.org/authors/?q=ai:hanjalic.alan"Shen, Heng Tao"https://www.zbmath.org/authors/?q=ai:shen.heng-taoSummary: Binary codes have often been deployed to facilitate large-scale retrieval tasks, but not that often for image compression. In this paper, we propose a unified framework, BGAN+, that restricts the input noise variable of generative adversarial networks to be binary and conditioned on the features of each input image, and simultaneously learns two binary representations per image: one for image retrieval and the other for image compression. Compared to related methods that attempt to learn a single binary code serving both purposes, we demonstrate that choosing two codes leads to more effective representations due to the fewer concessions needed when balancing the requirements. The added value of using a unified framework compared to two separate frameworks lies in the synergy in data representation that is beneficial for both learning processes. When devising this framework, we also address another challenge in learning binary codes, namely that of learning supervision.
While the most striking successes in image retrieval using binary codes have mostly involved discriminative models requiring labels, the proposed BGAN+ framework learns the binary codes in an unsupervised fashion, yet more effectively than the state-of-the-art supervised approaches. The proposed BGAN+ framework is evaluated on three benchmark datasets for image retrieval and two datasets on image compression. The experimental results show that BGAN+ outperforms the existing retrieval methods by significant margins and achieves promising performance for image compression, especially for low bit rates.A robust generative classifier against transfer attacks based on variational auto-encodershttps://www.zbmath.org/1483.683492022-05-16T20:40:13.078697Z"Zhang, Chen"https://www.zbmath.org/authors/?q=ai:zhang.chen"Tang, Zhuo"https://www.zbmath.org/authors/?q=ai:tang.zhuo"Zuo, Youfei"https://www.zbmath.org/authors/?q=ai:zuo.youfei"Li, Kenli"https://www.zbmath.org/authors/?q=ai:li.kenli"Li, Keqin"https://www.zbmath.org/authors/?q=ai:li.keqinSummary: Deep neural networks (DNNs) are vulnerable to adversarial examples. Even under the black-box setting, i.e., without access to the target model, transfer-based attacks can easily fool DNNs. To alleviate this problem, we propose a robust classification model against transfer attacks based on the framework of variational auto-encoders (VAEs), which are probabilistic generative models that have been successfully applied to a large number of tasks. Specifically, our model simulates the data generative process with several multivariate Gaussian distributions and DNNs: (1) We assume that the latent embedding generated by an \textit{encoder} (a DNN) for each category corresponds to a multivariate Gaussian distribution. (2) A \textit{decoder} (a DNN) decodes the latent embedding into an observable.
(3) Theoretical analysis illustrates that our model can predict data labels by maximizing the lower bound on the log-likelihood for each category utilizing Bayes' theorem, with excellent robustness against transfer attacks. Inference in our model is done in a variational way, so the Stochastic Gradient Variational Bayes (SGVB) estimator and the reparameterization trick can be utilized to optimize the evidence lower bound (ELBO). The experiments with quantitative comparisons show that our approach reaches state-of-the-art accuracy with significantly better robustness.NetSDM: semantic data mining with network analysishttps://www.zbmath.org/1483.683502022-05-16T20:40:13.078697Z"Kralj, Jan"https://www.zbmath.org/authors/?q=ai:kralj.jan"Robnik-Sikonja, Marko"https://www.zbmath.org/authors/?q=ai:robnik-sikonja.marko"Lavrac, Nada"https://www.zbmath.org/authors/?q=ai:lavrac.nadaSummary: Semantic data mining (SDM) is a form of relational data mining that uses annotated data together with complex semantic background knowledge to learn rules that can be easily interpreted. The drawback of SDM is the high computational complexity of existing SDM algorithms, resulting in long run times even when applied to relatively small data sets. This paper proposes an effective SDM approach, named NetSDM, which first transforms the available semantic background knowledge into a network format, followed by network analysis-based node ranking and pruning to significantly reduce the size of the original background knowledge.
The experimental evaluation of the NetSDM methodology on acute lymphoblastic leukemia and breast cancer data demonstrates that NetSDM achieves radical time efficiency improvements and that the learned rules are comparable to or better than the rules obtained by the original SDM algorithms.A greedy feature selection algorithm for big data of high dimensionalityhttps://www.zbmath.org/1483.683512022-05-16T20:40:13.078697Z"Tsamardinos, Ioannis"https://www.zbmath.org/authors/?q=ai:tsamardinos.ioannis"Borboudakis, Giorgos"https://www.zbmath.org/authors/?q=ai:borboudakis.giorgos"Katsogridakis, Pavlos"https://www.zbmath.org/authors/?q=ai:katsogridakis.pavlos"Pratikakis, Polyvios"https://www.zbmath.org/authors/?q=ai:pratikakis.polyvios"Christophides, Vassilis"https://www.zbmath.org/authors/?q=ai:christophides.vassilisSummary: We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) for Big Data of high dimensionality. PFBP partitions the data matrix in terms of both rows and columns. By employing the concepts of \(p\)-values of conditional independence tests and meta-analysis techniques, PFBP relies only on computations local to a partition while minimizing communication costs, thus massively parallelizing computations. Similar techniques for combining local computations are also employed to create the final predictive model. PFBP employs asymptotically sound heuristics to make early, approximate decisions, such as Early Dropping of features from consideration in subsequent iterations, Early Stopping of consideration of features within the same iteration, or Early Return of the winner in each iteration. PFBP provides asymptotic guarantees of optimality for data distributions faithfully representable by a causal network (Bayesian network or maximal ancestral graph).
Empirical analysis confirms a super-linear speedup of the algorithm with increasing sample size, and linear scalability with respect to the number of features and processing cores. An extensive comparative evaluation also demonstrates the effectiveness of PFBP against other algorithms in its class. The heuristics presented are general and could potentially be employed in other greedy-type FS algorithms. An application on simulated Single Nucleotide Polymorphism (SNP) data with 500K samples is provided as a use case.Handwritten mathematical expression recognition via paired adversarial learninghttps://www.zbmath.org/1483.683522022-05-16T20:40:13.078697Z"Wu, Jin-Wen"https://www.zbmath.org/authors/?q=ai:wu.jinwen"Yin, Fei"https://www.zbmath.org/authors/?q=ai:yin.fei"Zhang, Yan-Ming"https://www.zbmath.org/authors/?q=ai:zhang.yanming"Zhang, Xu-Yao"https://www.zbmath.org/authors/?q=ai:zhang.xu-yao"Liu, Cheng-Lin"https://www.zbmath.org/authors/?q=ai:liu.chenglinSummary: Recognition of handwritten mathematical expressions (MEs) is an important problem that has wide applications in practice. Handwritten ME recognition is challenging due to the variety of writing styles and ME formats. As a result, recognizers trained by optimizing the traditional supervision loss do not perform satisfactorily. To improve the robustness of the recognizer with respect to writing styles, in this work, we propose a novel paired adversarial learning method to learn semantic-invariant features. Specifically, our proposed model, named PAL-v2, consists of an attention-based recognizer and a discriminator. During training, handwritten MEs and their printed templates are fed into PAL-v2 simultaneously. The attention-based recognizer is trained to learn semantic-invariant features under the guidance of the discriminator.
Moreover, we adopt a convolutional decoder to alleviate the vanishing and exploding gradient problems of RNN-based decoders, and further improve the coverage of decoding with a novel attention method. We conducted extensive experiments on the CROHME dataset to demonstrate the effectiveness of each part of the method and achieved state-of-the-art performance.On hierarchical task networkshttps://www.zbmath.org/1483.683532022-05-16T20:40:13.078697Z"Herzig, Andreas"https://www.zbmath.org/authors/?q=ai:herzig.andreas"Perrussel, Laurent"https://www.zbmath.org/authors/?q=ai:perrussel.laurent"Xiao, Zhanhao"https://www.zbmath.org/authors/?q=ai:xiao.zhanhaoSummary: In planning based on hierarchical task networks (HTN), plans are generated by refining high-level actions (``compound tasks'') into lower-level actions, until primitive actions are obtained that can be sent to execution. While a primitive action is defined by its precondition and effects, a high-level action is defined by zero, one or several methods: sets of (high-level or primitive) actions decomposing it together with a constraint. We give a semantics of HTNs in terms of dynamic logic with program inclusion. We propose postulates guaranteeing soundness and completeness of action refinement. We also show that hybrid planning can be analysed in the same dynamic logic framework.
For the entire collection see [Zbl 1350.68015].Refinement of intentionshttps://www.zbmath.org/1483.683542022-05-16T20:40:13.078697Z"Herzig, Andreas"https://www.zbmath.org/authors/?q=ai:herzig.andreas"Perrussel, Laurent"https://www.zbmath.org/authors/?q=ai:perrussel.laurent"Xiao, Zhanhao"https://www.zbmath.org/authors/?q=ai:xiao.zhanhao"Zhang, Dongmo"https://www.zbmath.org/authors/?q=ai:zhang.dongmoSummary: According to Bratman, future-directed intentions are high-level plans. We view such plans as high-level actions that can typically not be executed directly: they have to be progressively refined until executable basic actions are obtained. Higher- and lower-level actions are linked by the means-end relation, alias instrumentality relation. In this paper we extend Shoham's database perspective of Bratman's theory by the notions of refinement and instrumentality.
For the entire collection see [Zbl 1350.68015].A modified extended Kalman filter method for multi-layered neural network traininghttps://www.zbmath.org/1483.683552022-05-16T20:40:13.078697Z"Kim, Kyungsup"https://www.zbmath.org/authors/?q=ai:kim.kyungsup"Won, Yoojae"https://www.zbmath.org/authors/?q=ai:won.yoojaeSummary: This paper discusses extended Kalman filter method for solving learning problems of multilayered neural networks. A lot of learning algorithms for deep layered network are sincerely suffered from complex computation and slow convergence because of a very large number of free parameters. We consider an efficient learning algorithm for deep neural network. Extended Kalman filter method is applied to parameter estimation of neural network to improve convergence and computation complexity. We discuss how an efficient algorithm should be developed for neural network learning by using Extended Kalman filter.Polarity and variable selection heuristics for SAT-based anytime MaxSAThttps://www.zbmath.org/1483.683562022-05-16T20:40:13.078697Z"Nadel, Alexander"https://www.zbmath.org/authors/?q=ai:nadel.alexanderSummary: This paper is a system description of the anytime MaxSAT solver \texttt{TT-Open-WBO-Inc}, which won both of the weighted incomplete tracks of MaxSAT Evaluation 2019. We implemented the recently introduced polarity and variable selection heuristics, \texttt{TORC} and \texttt{TSB}, respectively, in the \texttt{Open-WBO-Inc-BMO} algorithm within the open-source anytime MaxSAT solver \texttt{Open-WBO-Inc}. As a result, the solver is substantially more efficient.DRAT proofs for XOR reasoninghttps://www.zbmath.org/1483.683572022-05-16T20:40:13.078697Z"Philipp, Tobias"https://www.zbmath.org/authors/?q=ai:philipp.tobias"Rebola-Pardo, Adrián"https://www.zbmath.org/authors/?q=ai:rebola-pardo.adrianSummary: Unsatisfiability proofs in the DRAT format became the de facto standard to increase the reliability of contemporary SAT solvers. 
We consider the problem of generating proofs for the XOR reasoning component in SAT solvers and propose two methods: direct translation transforms every XOR constraint addition inference into a DRAT proof, whereas T-translation avoids the exponential blow-up in direct translations by using fresh variables. T-translation produces DRAT proofs from Gaussian elimination records that are polynomial in the size of the input CNF formula. Experiments show that a combination of both approaches with a simple prediction method outperforms the BDD-based method.
For the entire collection see [Zbl 1350.68015].A schema theorem of evolution based on gene expression programminghttps://www.zbmath.org/1483.683582022-05-16T20:40:13.078697Z"Wang, Yue"https://www.zbmath.org/authors/?q=ai:wang.yue.6"Tang, Chang Jie"https://www.zbmath.org/authors/?q=ai:tang.changjie"Yang, Ning"https://www.zbmath.org/authors/?q=ai:yang.ning"Chen, Yu"https://www.zbmath.org/authors/?q=ai:chen.yu.7"Xu, Kai Kuo"https://www.zbmath.org/authors/?q=ai:xu.kaikuo(no abstract)Fast searching on complete \(k\)-partite graphshttps://www.zbmath.org/1483.683592022-05-16T20:40:13.078697Z"Xue, Yuan"https://www.zbmath.org/authors/?q=ai:xue.yuan"Yang, Boting"https://www.zbmath.org/authors/?q=ai:yang.boting"Zhong, Farong"https://www.zbmath.org/authors/?q=ai:zhong.farong"Zilles, Sandra"https://www.zbmath.org/authors/?q=ai:zilles.sandraSummary: Research on graph searching has recently gained interest in computer science, mathematics, and physics. This paper studies fast searching of a fugitive in a graph, a model that was introduced by \textit{D. Dyer} et al. in [Lect. Notes Comput. Sci. 5034, 143--154 (2008; Zbl 1143.68587)]. We provide lower bounds and upper bounds on the fast search number (i.e., the minimum number of searchers required for capturing the fugitive) of complete \(k\)-partite graphs. We also investigate some special classes of complete \(k\)-partite graphs, such as complete bipartite graphs and complete split graphs. We solve the open problem of determining the fast search number of complete bipartite graphs, and present upper and lower bounds on the fast search number of complete split graphs.
For the entire collection see [Zbl 1377.68004].Resolving distributed knowledgehttps://www.zbmath.org/1483.683602022-05-16T20:40:13.078697Z"Ågotnes, Thomas"https://www.zbmath.org/authors/?q=ai:agotnes.thomas"Wáng, Yì N."https://www.zbmath.org/authors/?q=ai:wang.yinan|wang.yi-nuo|wang.yinong|wang.yining|wang.yinu|wang.yinaoSummary: \textit{Distributed knowledge} is the sum of the knowledge in a group; what someone who is able to discern between two possible worlds whenever \textit{any} member of the group can discern between them, would know. Sometimes distributed knowledge is referred to as the potential knowledge of a group, or the joint knowledge they could obtain if they had unlimited means of communication. In epistemic logic, the formula \(D_G\varphi\) is intended to express the fact that group \(G\) has distributed knowledge of \(\varphi\), that there is enough information in the group to infer \(\varphi\). But this is not the same as reasoning about \textit{what happens if the members of the group share their information}. In this paper we introduce an operator \(R_G\), such that \(R_G\varphi\) means that \(\varphi\) is true after \(G\) have shared all their information with each other -- after \(G\)'s distributed knowledge has been \textit{resolved}. The \(R_G\) operators are called \textit{resolution operators}. Semantically, we say that an expression \(R_G\varphi\) is true iff \(\varphi\) is true in what \textit{J. van Benthem} [Logical dynamics of information and interaction. Cambridge: Cambridge University Press (2011; Zbl 1251.03003), p.~249] calls (\(G\)'s) \textit{communication core}; the model update obtained by removing links to states for members of \(G\) that are not linked by \textit{all} members of \(G\). We study logics with different combinations of resolution operators and operators for common and distributed knowledge. Of particular interest is the relationship between distributed and common knowledge. 
The main results are sound and complete axiomatizations.
For the entire collection see [Zbl 1446.68019].Epistemic protocols for distributed gossipinghttps://www.zbmath.org/1483.683612022-05-16T20:40:13.078697Z"Apt, Krzysztof R."https://www.zbmath.org/authors/?q=ai:apt.krzysztof-rafal"Grossi, Davide"https://www.zbmath.org/authors/?q=ai:grossi.davide"van der Hoek, Wiebe"https://www.zbmath.org/authors/?q=ai:van-der-hoek.wiebeSummary: Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other's secrets. We consider distributed gossip protocols which are expressed by means of epistemic logic. We provide an operational semantics of such protocols and set up an appropriate framework to argue about their correctness. Then we analyze specific protocols for complete graphs and for directed rings.
For the entire collection see [Zbl 1446.68019].Common knowledge in a logic of gossipshttps://www.zbmath.org/1483.683622022-05-16T20:40:13.078697Z"Apt, Krzysztof R."https://www.zbmath.org/authors/?q=ai:apt.krzysztof-rafal"Wojtczak, Dominik"https://www.zbmath.org/authors/?q=ai:wojtczak.dominikSummary: Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other's secrets. Recently a number of authors studied distributed epistemic gossip protocols. These protocols use as guards formulas from a simple epistemic logic, which makes their analysis and verification substantially easier.
We study here common knowledge in the context of such a logic. First, we analyze when it can be reduced to iterated knowledge. Then we show that the semantics and truth for formulas without nested common knowledge operators are decidable. This implies that implementability, partial correctness and termination of distributed epistemic gossip protocols that use non-nested common knowledge operators are decidable as well. Given that common knowledge is equivalent to an infinite conjunction of nested knowledge, these results are non-trivial generalizations of the corresponding decidability results for the original epistemic logic, established in
[the authors, Lect. Notes Comput. Sci. 10021, 18--33 (2016; Zbl 06658150)].
For the entire collection see [Zbl 1446.68014].Conversation and gameshttps://www.zbmath.org/1483.683632022-05-16T20:40:13.078697Z"Asher, Nicholas"https://www.zbmath.org/authors/?q=ai:asher.nicholas-m"Paul, Soumya"https://www.zbmath.org/authors/?q=ai:paul.soumyaSummary: In this paper we summarize concepts from earlier work and demonstrate how infinite sequential games can be used to model strategic conversations. Such a model allows one to reason about the structure and complexity of various kinds of winning goals that conversationalists might have. We show how to use tools from topology, set theory and logic to express such goals. We then show how to tie down the notion of a winning condition to specific discourse moves using techniques from Mean Payoff games and discounting. We argue, however, that this still requires another addition from epistemic game theory to define appropriate solution concepts and notions of rationality underlying a conversation.
For the entire collection see [Zbl 1416.03005].Approximate unification in the description logic \(\mathcal {FL}_0\)https://www.zbmath.org/1483.683642022-05-16T20:40:13.078697Z"Baader, Franz"https://www.zbmath.org/authors/?q=ai:baader.franz"Marantidis, Pavlos"https://www.zbmath.org/authors/?q=ai:marantidis.pavlos"Okhotin, Alexander"https://www.zbmath.org/authors/?q=ai:okhotin.alexanderSummary: Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL \(\mathcal {FL}_0\), where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that ``almost'' unify. The meaning of ``almost'' is formalized using distance measures between concepts. We show that approximate unification in \(\mathcal {FL}_0\) can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.
For the entire collection see [Zbl 1350.68015].Temporal here and therehttps://www.zbmath.org/1483.683652022-05-16T20:40:13.078697Z"Balbiani, Philippe"https://www.zbmath.org/authors/?q=ai:balbiani.philippe"Diéguez, Martín"https://www.zbmath.org/authors/?q=ai:dieguez.martinSummary: Temporal here and there (THT) constitutes the logical foundations of temporal equilibrium logic. Nevertheless, it has never been studied in detail since results about axiomatisation and interdefinability of modal operators remained unknown. In this paper we provide a sound and complete axiomatic system for THT together with several results on interdefinability of modal operators.
For the entire collection see [Zbl 1350.68015].On logics of group belief in structured coalitionshttps://www.zbmath.org/1483.683662022-05-16T20:40:13.078697Z"Balbiani, Philippe"https://www.zbmath.org/authors/?q=ai:balbiani.philippe"Pearce, David"https://www.zbmath.org/authors/?q=ai:pearce.david-j|pearce.david-g|pearce.david-a-j"Uridia, Levan"https://www.zbmath.org/authors/?q=ai:uridia.levanSummary: In the study of group belief formation, groups of agents are often assumed to possess a topological structure. Here we investigate some ways in which this topological structure may provide the semantical basis for logics of group belief. We impose a partial order on a set of agents, first, to be able to express the preferences of agents in terms of their doxastic abilities; secondly, to express the idea of a coalition (a well-formed group); and thirdly, to give a natural semantics for the group belief operator. We define the group belief of a set of agents in two different ways and study their corresponding logics. We also study a logic where doxastic preference is expressed by a binary operator. We prove completeness and discuss correspondences between the logics.
For the entire collection see [Zbl 1350.68015].On the solvability of inductive problems: a study in epistemic topologyhttps://www.zbmath.org/1483.683672022-05-16T20:40:13.078697Z"Baltag, Alexandru"https://www.zbmath.org/authors/?q=ai:baltag.alexandru"Gierasimczuk, Nina"https://www.zbmath.org/authors/?q=ai:gierasimczuk.nina"Smets, Sonja"https://www.zbmath.org/authors/?q=ai:smets.sonja-j-lSummary: We investigate the issues of inductive problem-solving and learning by doxastic agents. We provide topological characterizations of solvability and learnability, and we use them to prove that AGM-style belief revision is ``universal'', i.e., that every solvable problem is solvable by AGM conditioning.
For the entire collection see [Zbl 1446.68019].Formalizing goal serializability for evaluation of planning featureshttps://www.zbmath.org/1483.683682022-05-16T20:40:13.078697Z"Basseda, Reza"https://www.zbmath.org/authors/?q=ai:basseda.reza"Kifer, Michael"https://www.zbmath.org/authors/?q=ai:kifer.michaelSummary: Evaluation of the properties of various planning techniques, such as completeness and termination, plays an important role in choosing an appropriate planning technique for a particular planning problem. In this paper, we use the already existing formal specification in transaction logic (\(\mathcal {TR}\)) of two well-known classic state space planning techniques, forward state space planning and goal stack planning, to study their completeness. Our study shows that using \(\mathcal {TR}\), we can formally specify the serializability of planning problems and prove the completeness of \(\textit{STRIPS}\) planning techniques for planning problems with serializable goals.
For the entire collection see [Zbl 1350.68015].A relaxation of internal conflict and defence in weighted argumentation frameworkshttps://www.zbmath.org/1483.683692022-05-16T20:40:13.078697Z"Bistarelli, Stefano"https://www.zbmath.org/authors/?q=ai:bistarelli.stefano"Rossi, Fabio"https://www.zbmath.org/authors/?q=ai:rossi.fabio"Santini, Francesco"https://www.zbmath.org/authors/?q=ai:santini.francescoSummary: In weighted abstract argumentation frameworks (WAAFs), weights on attacks bring more information. An advantage is the possibility to define a different notion of defence, which also compares the weight associated with defence against the weight of attacks. We study and merge together two different relaxations of classically crisp concepts in WAAFs: one is related to a new notion of weighted defence (defence can be stronger or weaker at will), while the second one is related to how much inconsistency one is willing to tolerate inside an extension (which now need not be totally conflict-free). These two relaxations are strictly related and influence each other: allowing a small conflict may lead to more arguments in an extension, and consequently result in a stronger or weaker defence. We model weights with a semiring structure, which can be instantiated to different metrics used in the literature (e.g., fuzzy WAAFs).
For the entire collection see [Zbl 1350.68015].From type spaces to probability frames and back, via languagehttps://www.zbmath.org/1483.683702022-05-16T20:40:13.078697Z"Bjorndahl, Adam"https://www.zbmath.org/authors/?q=ai:bjorndahl.adam"Halpern, Joseph Y."https://www.zbmath.org/authors/?q=ai:halpern.joseph-ySummary: We investigate the connection between the two major mathematical frameworks for modeling interactive beliefs: \textit{Harsanyi type spaces} and possible-worlds style \textit{probability frames}. While translating the former into the latter is straightforward, we demonstrate that the reverse translation relies implicitly on a background logical language. Once this ``language parameter'' is made explicit, it reveals a close relationship between \textit{universal} type spaces and \textit{canonical} models: namely, that they are essentially the same construct. As the nature of a canonical model depends heavily on the background logic used to generate it, this work suggests a new view into a corresponding landscape of universal type spaces.
For the entire collection see [Zbl 1446.68014].Decidability and expressivity of Ockhamist propositional dynamic logicshttps://www.zbmath.org/1483.683712022-05-16T20:40:13.078697Z"Boudou, Joseph"https://www.zbmath.org/authors/?q=ai:boudou.joseph"Lorini, Emiliano"https://www.zbmath.org/authors/?q=ai:lorini.emilianoSummary: Ockhamist propositional dynamic logic (\(\mathsf {OPDL}\)) is a logic unifying the family of dynamic logics and the family of branching-time temporal logics, two families of logic widely used in AI to model reactive systems and multi-agent systems (MAS). In this paper, we present two variants of this logic. These two logics share the same language and differ only in one semantic condition. The first logic embeds Bundled \(\textsf {CTL}^*\) while the second embeds \(\textsf {CTL}^*\). We provide a 2EXPTIME decision procedure for the satisfiability problem of each variant. The decision procedure for the first variant of \(\mathsf {OPDL}\) is based on the elimination of Hintikka sets while the decision procedure for the second variant relies on automata.
For the entire collection see [Zbl 1350.68015].On the expressiveness of temporal equilibrium logichttps://www.zbmath.org/1483.683722022-05-16T20:40:13.078697Z"Bozzelli, Laura"https://www.zbmath.org/authors/?q=ai:bozzelli.laura"Pearce, David"https://www.zbmath.org/authors/?q=ai:pearce.david-a-j|pearce.david-g|pearce.david-jSummary: We investigate expressiveness issues of temporal equilibrium logic (TEL), a promising nonmonotonic logical framework for temporal reasoning. TEL shares the syntax of standard linear temporal logic LTL, but its semantics is an orthogonal combination of the LTL semantics with the nonmonotonic semantics of equilibrium logic. We establish that TEL is more expressive than LTL, and captures a strict subclass of \(\omega\)-regular languages. We illustrate the expressive power of \(\textsf {TEL}\) by showing that \(\textsf {LTL}\)-conformant planning, which is not expressible in \(\textsf {LTL}\), can be instead expressed in \(\textsf {TEL}\). Additionally, we provide a systematic study of the expressiveness comparison between the LTL semantics and the TEL semantics for various natural syntactical fragments.
For the entire collection see [Zbl 1350.68015].Introducing role defeasibility in description logicshttps://www.zbmath.org/1483.683732022-05-16T20:40:13.078697Z"Britz, Katarina"https://www.zbmath.org/authors/?q=ai:britz.katarina"Varzinczak, Ivan"https://www.zbmath.org/authors/?q=ai:varzinczak.ivan-joseSummary: Accounts of preferential reasoning in description logics often take as point of departure the semantic notion of a preference order on objects in a domain of interpretation, which allows for the development of notions of defeasible subsumption and entailment. However, such an approach does not account for defeasible roles, interpreted as partially ordered sets of tuples. We state the case for role defeasibility and introduce a corresponding preferential semantics for a number of defeasible constructs on roles. We show that this does not negatively affect decidability or complexity of reasoning for an important class of DLs, and that existing notions of preferential reasoning can be expressed in terms of defeasible roles.
For the entire collection see [Zbl 1350.68015].Preservation of semantic properties during the aggregation of abstract argumentation frameworkshttps://www.zbmath.org/1483.683742022-05-16T20:40:13.078697Z"Chen, Weiwei"https://www.zbmath.org/authors/?q=ai:chen.weiwei"Endriss, Ulle"https://www.zbmath.org/authors/?q=ai:endriss.ulleSummary: An abstract argumentation framework can be used to model the argumentative stance of an agent at a high level of abstraction, by indicating for every pair of arguments that is being considered in a debate whether the first attacks the second. When modelling a group of agents engaged in a debate, we may wish to aggregate their individual argumentation frameworks to obtain a single such framework that reflects the consensus of the group. Even when agents disagree on many details, there may well be high-level agreement on important semantic properties, such as the acceptability of a given argument. Using techniques from social choice theory, we analyse under what circumstances such semantic properties agreed upon by the individual agents can be preserved under aggregation.
For the entire collection see [Zbl 1446.68014].Opposition frameworkshttps://www.zbmath.org/1483.683752022-05-16T20:40:13.078697Z"Croitoru, Cosmina"https://www.zbmath.org/authors/?q=ai:croitoru.cosmina"Mehlhorn, Kurt"https://www.zbmath.org/authors/?q=ai:mehlhorn.kurtSummary: In this paper we introduce opposition frameworks, a generalization of Dung's argumentation frameworks.
While keeping the attack relation as the sole type of interaction between nodes and the abstract level of argumentation frameworks, opposition networks add more flexibility, reducing the gap between structured and abstract argumentation. A guarded attack calculus is developed in order to obtain proper generalizations of Dung's admissibility-based semantics. The high modeling capabilities of our new setting offer an alternative instantiation solution (of other existing argumentation frameworks) for argument evaluation.
For the entire collection see [Zbl 1350.68015].Reasoning about justified belief based on the fusion of evidencehttps://www.zbmath.org/1483.683762022-05-16T20:40:13.078697Z"Fan, Tuan-Fang"https://www.zbmath.org/authors/?q=ai:fan.tuan-fang"Liau, Churn-Jung"https://www.zbmath.org/authors/?q=ai:liau.churn-jungSummary: In this paper, we propose logics for reasoning about belief and evidence. Starting from justification logic (JL) in which the reasons why a fact is believed are explicitly represented as justification terms, we explore the relationship between justified belief and fused information from different evidential sources. We argue that the expressive power of JL is inadequate for our purpose, because, while a justification formula can represent that a piece of evidence is admissible for the belief, it cannot express whether the evidence has been actually observed. Therefore, to address the issue, we propose more fine-grained JL's that can express the informational content of evidence, and the actual observation of evidence is definable in such logics. As a byproduct, we also show that the proposed logics are easily extended to accommodate dynamic evidential reasoning. Consequently, we can integrate JL and dynamic epistemic logic (DEL) paradigms in a natural way.
For the entire collection see [Zbl 1350.68015].A formal approach to the problem of logical non-omnisciencehttps://www.zbmath.org/1483.683772022-05-16T20:40:13.078697Z"Garrabrant, Scott"https://www.zbmath.org/authors/?q=ai:garrabrant.scott-m"Benson-Tilsen, Tsvi"https://www.zbmath.org/authors/?q=ai:benson-tilsen.tsvi"Critch, Andrew"https://www.zbmath.org/authors/?q=ai:critch.andrew-james"Soares, Nate"https://www.zbmath.org/authors/?q=ai:soares.nate"Taylor, Jessica"https://www.zbmath.org/authors/?q=ai:taylor.jessicaSummary: We present the \textit{logical induction criterion} for computable algorithms that assign probabilities to every logical statement in a given formal language, and refine those probabilities over time. The criterion is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence \(\phi\) is associated with a stock that is worth \$1 per share if \(\phi\) is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where \(\mathbb{P}_n(\phi)=50\%\) means that on day \(n\), shares of \(\phi\) may be bought or sold from the reasoner for 50c. A market is then called a \textit{logical inductor} if (very roughly) there is no polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. We then describe how this single criterion implies a number of desirable properties of bounded reasoners; for example, logical inductors outpace their underlying deductive process, perform universal empirical induction given enough time to think, and place strong trust in their own reasoning process.
For the entire collection see [Zbl 1446.68014].Writing declarative specifications for clauseshttps://www.zbmath.org/1483.683782022-05-16T20:40:13.078697Z"Gebser, Martin"https://www.zbmath.org/authors/?q=ai:gebser.martin"Janhunen, Tomi"https://www.zbmath.org/authors/?q=ai:janhunen.tomi"Kaminski, Roland"https://www.zbmath.org/authors/?q=ai:kaminski.roland"Schaub, Torsten"https://www.zbmath.org/authors/?q=ai:schaub.torsten-h"Tasharrofi, Shahab"https://www.zbmath.org/authors/?q=ai:tasharrofi.shahabSummary: Modern satisfiability (SAT) solvers provide an efficient implementation of classical propositional logic. Their input language, however, is based on the conjunctive normal form (CNF) of propositional formulas. To use SAT solver technology in practice, a user must create the input clauses in one way or another. A typical approach is to write a procedural program that generates formulas on the basis of some input data relevant for the problem domain and translates them into CNF. In this paper, we propose a declarative approach where the intended clauses are specified in terms of rules in analogy to answer set programming (ASP). This allows the user to write first-order specifications for intended clauses in a schematic way by exploiting term variables. We develop a formal framework required to define the semantics of such specifications. Moreover, we provide an implementation harnessing state-of-the-art ASP grounders to accomplish the grounding step of clauses. As a result, we obtain a general-purpose clause-level grounding approach for SAT solvers. Finally, we illustrate the capabilities of our specification methodology in terms of combinatorial and application problems.
For the entire collection see [Zbl 1350.68015].Incremental computation of deterministic extensions for dynamic argumentation frameworkshttps://www.zbmath.org/1483.683792022-05-16T20:40:13.078697Z"Greco, Sergio"https://www.zbmath.org/authors/?q=ai:greco.sergio"Parisi, Francesco"https://www.zbmath.org/authors/?q=ai:parisi.francescoSummary: We address the problem of efficiently recomputing the extensions of abstract argumentation frameworks (AFs) which are updated by adding/deleting arguments or attacks. In particular, after identifying some properties that hold for updates of AFs under several well-known semantics, we focus on the two most popular `deterministic' semantics (namely, grounded and ideal) and present two algorithms for their incremental computation, well-suited to dynamic applications where updates to an initial AF are frequently performed to take into account new available knowledge. We experimentally validated the proposed approach.
For the entire collection see [Zbl 1350.68015].GenB: a general solver for AGM revisionhttps://www.zbmath.org/1483.683802022-05-16T20:40:13.078697Z"Hunter, Aaron"https://www.zbmath.org/authors/?q=ai:hunter.aaron"Tsang, Eric"https://www.zbmath.org/authors/?q=ai:tsang.eric-k-c|tsang.eric-c-cSummary: We describe a general tool for solving belief revision problems with a range of different operators. Our tool allows a user to flexibly specify a total pre-order over states, using simple selection boxes in a graphical user interface. In this manner, we are able to calculate the result of any AGM revision operator. The user is also able to specify so-called trust partitions to calculate the result of trust-sensitive revision. The overall goal is to provide users with a simple tool that can be used in applications involving AGM-style revision. While the tool can be demonstrated and tested as a standalone application with a fixed user interface, what we have actually developed is a set of libraries and functions that can flexibly be incorporated in other systems. It is anticipated that this tool will be useful for experimentation, education, and prototyping to solve problems in formal reasoning.
For the entire collection see [Zbl 1350.68015].Efficient reasoning for inconsistent Horn formulaehttps://www.zbmath.org/1483.683812022-05-16T20:40:13.078697Z"Marques-Silva, Joao"https://www.zbmath.org/authors/?q=ai:marques-silva.joao-p"Ignatiev, Alexey"https://www.zbmath.org/authors/?q=ai:ignatyev.alexey-a"Mencía, Carlos"https://www.zbmath.org/authors/?q=ai:mencia.carlos"Peñaloza, Rafael"https://www.zbmath.org/authors/?q=ai:penaloza.rafaelSummary: Horn formulae are widely used in different settings that include logic programming, answer set programming, description logics, deductive databases, and system verification, among many others. One concrete example is concept subsumption in lightweight description logics, which can be reduced to inference in propositional Horn formulae. Some problems require one to reason with inconsistent Horn formulae. This is the case when providing minimal explanations of inconsistency. This paper proposes efficient algorithms for a number of decision, function and enumeration problems related with inconsistent Horn formulae. Concretely, the paper develops efficient algorithms for finding and enumerating minimal unsatisfiable subsets (MUSes), minimal correction subsets (MCSes), but also for computing the lean kernel. The paper also shows the practical importance of some of the proposed algorithms.
For the entire collection see [Zbl 1350.68015].Endogenizing epistemic actionshttps://www.zbmath.org/1483.683822022-05-16T20:40:13.078697Z"Nalls, Will"https://www.zbmath.org/authors/?q=ai:nalls.will"Bjorndahl, Adam"https://www.zbmath.org/authors/?q=ai:bjorndahl.adamSummary: Through a series of examples, we illustrate some important drawbacks that the \textit{action logic} framework suffers from in its ability to represent the dynamics of information updates. We argue that these problems stem from the fact that the \textit{action model}, a central construct designed to encode agents' uncertainty about actions, is itself effectively common knowledge amongst the agents. In response to these difficulties, we motivate and propose an alternative semantics that avoids them by (roughly speaking) endogenizing the action model. We discuss the relationship to action logic, and provide a sound and complete axiomatization.
For the entire collection see [Zbl 1446.68014].Pakota: a system for enforcement in abstract argumentationhttps://www.zbmath.org/1483.683832022-05-16T20:40:13.078697Z"Niskanen, Andreas"https://www.zbmath.org/authors/?q=ai:niskanen.andreas"Wallner, Johannes P."https://www.zbmath.org/authors/?q=ai:wallner.johannes-peter"Järvisalo, Matti"https://www.zbmath.org/authors/?q=ai:jarvisalo.mattiSummary: In this paper we describe Pakota, a system implementation that allows for solving enforcement problems over argumentation frameworks. Via harnessing Boolean satisfiability (SAT) and maximum satisfiability (MaxSAT) solvers, Pakota implements algorithms for extension and status enforcement under various central AF semantics, covering a range of NP-complete -- via direct MaxSAT encodings -- and \(\mathrm{\Sigma}_{2}^\mathrm{P}\)-complete -- via MaxSAT-based counterexample-guided abstraction refinement -- enforcement problems. We overview the algorithmic approaches implemented in Pakota, and describe in detail the system architecture, features, interfaces, and usage of the system. Furthermore, we present an empirical evaluation on the impact of the choice of MaxSAT solvers on the scalability of the system, and also provide benchmark generators for extension and status enforcement.
For the entire collection see [Zbl 1350.68015].Understanding the abstract dialectical frameworkhttps://www.zbmath.org/1483.683842022-05-16T20:40:13.078697Z"Polberg, Sylwia"https://www.zbmath.org/authors/?q=ai:polberg.sylwiaSummary: Among the most general structures extending the framework by Dung are the abstract dialectical frameworks (ADFs). They come equipped with various types of semantics, with the most prominent -- the labeling-based one -- being analyzed in the context of computational complexity, instantiations and software support. This makes the abstract dialectical frameworks valuable tools for argumentation. However, there are fewer results available concerning the relation between the ADFs and other argumentation frameworks. In this paper we would like to address this issue by introducing a number of translations from various formalisms into ADFs. The results of our study show the similarities and differences between them, thus promoting the use and understanding of ADFs. Moreover, our analysis also proves their capability to model many of the existing frameworks, including those that go beyond the attack relation. Finally, translations allow other structures to benefit from the research on ADFs in general and from the existing software in particular.
For the entire collection see [Zbl 1350.68015].A two-phase dialogue game for skeptical preferred semanticshttps://www.zbmath.org/1483.683852022-05-16T20:40:13.078697Z"Shams, Zohreh"https://www.zbmath.org/authors/?q=ai:shams.zohreh"Oren, Nir"https://www.zbmath.org/authors/?q=ai:oren.nirSummary: In this paper we propose a labelling based dialogue game for determining whether a single argument within a Dung argumentation framework is skeptically preferred. Our game consists of two phases, and determines the membership of a single argument within the extension, assuming optimal play by dialogue participants. In the first phase, one player attempts to advance arguments to construct an extension not containing the argument under consideration, while the second phase verifies that the extension is indeed a preferred one. Correctness within this basic game requires perfect play by both players, and we therefore also introduce an overarching game to overcome this limitation.
For the entire collection see [Zbl 1350.68015].Argument-based belief in topological structureshttps://www.zbmath.org/1483.683862022-05-16T20:40:13.078697Z"Shi, Chenwei"https://www.zbmath.org/authors/?q=ai:shi.chenwei"Smets, Sonja"https://www.zbmath.org/authors/?q=ai:smets.sonja-j-l"Velázquez-Quesada, Fernando R."https://www.zbmath.org/authors/?q=ai:velazquez-quesada.fernando-raymundoSummary: This paper combines two studies: a topological semantics for epistemic notions and abstract argumentation theory. In our combined setting, we use a topological semantics to represent the structure of an agent's collection of evidence, and we use argumentation theory to single out the relevant sets of evidence through which a notion of \textit{beliefs grounded on arguments} is defined. We discuss the formal properties of this newly defined notion, providing also a formal language with a matching modality together with a sound and complete axiom system for it. Despite the fact that our agent can combine her evidence in a `rational' way (captured via the topological structure), argument-based beliefs are not closed under conjunction. This illustrates the difference between an agent's reasoning abilities (i.e. the way she is able to combine her available evidence) and the closure properties of her beliefs. We use this point to argue for why the failure of closure under conjunction of belief should not bear the burden of the failure of rationality.
For the entire collection see [Zbl 1446.68014].Parameterized complexity results for a model of theory of mind based on dynamic epistemic logichttps://www.zbmath.org/1483.683872022-05-16T20:40:13.078697Z"van de Pol, Iris"https://www.zbmath.org/authors/?q=ai:van-de-pol.iris"van Rooij, Iris"https://www.zbmath.org/authors/?q=ai:van-rooij.iris"Szymanik, Jakub"https://www.zbmath.org/authors/?q=ai:szymanik.jakubSummary: In this paper we introduce a computational-level model of theory of mind (ToM) based on dynamic epistemic logic (DEL), and we analyze its computational complexity. The model is a special case of DEL model checking. We provide a parameterized complexity analysis, considering several aspects of DEL (e.g., number of agents, size of preconditions, etc.) as parameters. We show that model checking for DEL is PSPACE-hard, also when restricted to single-pointed models and S5 relations, thereby solving an open problem in the literature. Our approach is aimed at formalizing current intractability claims in the cognitive science literature regarding computational models of ToM.
For the entire collection see [Zbl 1446.68019].Announcement as effort on topological spaceshttps://www.zbmath.org/1483.683882022-05-16T20:40:13.078697Z"van Ditmarsch, Hans"https://www.zbmath.org/authors/?q=ai:van-ditmarsch.hans-pieter"Knight, Sophia"https://www.zbmath.org/authors/?q=ai:knight.sophia"Özgün, Aybüke"https://www.zbmath.org/authors/?q=ai:ozgun.aybukeSummary: We propose a multi-agent logic of knowledge, public and arbitrary announcements, that is interpreted on topological spaces in the style of subset space semantics. The arbitrary announcement modality functions similarly to the effort modality in subset space logics, however, it comes with intuitive and semantic differences. We provide axiomatizations for three logics based on this setting, and demonstrate their completeness.
For the entire collection see [Zbl 1446.68019].Conditional belief, knowledge and probabilityhttps://www.zbmath.org/1483.683892022-05-16T20:40:13.078697Z"van Eijck, Jan"https://www.zbmath.org/authors/?q=ai:van-eijck.jan"Li, Kai"https://www.zbmath.org/authors/?q=ai:li.kaiSummary: A natural way to represent beliefs and the process of updating beliefs is presented by Bayesian probability theory, where belief of an agent \(a\) in \(P\) can be interpreted as \(a\) considering that \(P\) is more probable than not \(P\). This paper attempts to get at the core logical notion underlying this.
The paper presents a sound and complete neighbourhood logic for conditional belief and knowledge, and traces the connections with probabilistic logics of belief and knowledge. The key notion in this paper is that of an agent \(a\) believing \(P\) conditionally on having information \(Q\), where it is assumed that \(Q\) is compatible with what \(a\) knows.
Conditional neighbourhood logic can be viewed as a core system for reasoning about subjective plausibility that is not yet committed to an interpretation in terms of numerical probability. Indeed, every weighted Kripke model gives rise to a conditional neighbourhood model, but not vice versa. We show that our calculus for conditional neighbourhood logic is sound but not complete for weighted Kripke models. Next, we show how to extend the calculus to get completeness for the class of weighted Kripke models.
Neighbourhood models for conditional belief are closed under model restriction (public announcement update), while earlier neighbourhood models for belief as `willingness to bet' were not. Therefore the logic we present improves on earlier neighbourhood logics for belief and knowledge. We present complete calculi for public announcement and for publicly revealing the truth value of propositions using reduction axioms. The reductions show that adding these announcement operators to the language does not increase expressive power.
For the entire collection see [Zbl 1446.68014].Correct grounded reasoning with presumptive argumentshttps://www.zbmath.org/1483.683902022-05-16T20:40:13.078697Z"Verheij, Bart"https://www.zbmath.org/authors/?q=ai:verheij.bartSummary: We address the semantics and normative questions for reasoning with presumptive arguments: how are presumptive arguments grounded in interpretations; and when are they evaluated as correct? For deductive and uncertain reasoning, classical logic and probability theory provide canonical answers to these questions. Staying formally close to these, we propose case models and their preferences as formal semantics for the interpretation of presumptive arguments. Arguments are evaluated as presumptively valid when they make a case that is maximally preferred. By qualitative and quantitative representation results, we show formal relations between deductive, uncertain and presumptive reasoning. In this way, the work is a step toward connecting logical and probabilistic approaches in AI.
For the entire collection see [Zbl 1350.68015].A dynamic epistemic framework for conformant planninghttps://www.zbmath.org/1483.683912022-05-16T20:40:13.078697Z"Yu, Quan"https://www.zbmath.org/authors/?q=ai:yu.quan"Li, Yanjun"https://www.zbmath.org/authors/?q=ai:li.yanjun"Wang, Yanjing"https://www.zbmath.org/authors/?q=ai:wang.yanjingSummary: In this paper, we introduce a lightweight dynamic epistemic logical framework for automated planning under initial uncertainty. We reduce plan verification and conformant planning to model checking problems of our logic. We show that the model checking problem of the iteration-free fragment is PSPACE-complete. By using two non-standard (but equivalent) semantics, we give novel model checking algorithms to the full language and the iteration-free language.
For the entire collection see [Zbl 1446.68019].Decision trees with at most 19 vertices for knowledge representationhttps://www.zbmath.org/1483.683922022-05-16T20:40:13.078697Z"Azad, Mohammad"https://www.zbmath.org/authors/?q=ai:azad.mohammadSummary: We study decision trees as a means of representation of knowledge. To this end, we design two techniques for the creation of CART (Classification and Regression Tree)-like decision trees that are based on bi-objective optimization algorithms. We investigate three parameters of the decision trees constructed by these techniques: number of vertices, global misclassification rate, and local misclassification rate.
For the entire collection see [Zbl 1475.68025].Inconsistency-tolerant query answering: rationality properties and computational complexity analysishttps://www.zbmath.org/1483.683932022-05-16T20:40:13.078697Z"Baget, Jean François"https://www.zbmath.org/authors/?q=ai:baget.jean-francois"Benferhat, Salem"https://www.zbmath.org/authors/?q=ai:benferhat.salem"Bouraoui, Zied"https://www.zbmath.org/authors/?q=ai:bouraoui.zied"Croitoru, Madalina"https://www.zbmath.org/authors/?q=ai:croitoru.madalina"Mugnier, Marie-Laure"https://www.zbmath.org/authors/?q=ai:mugnier.marie-laure"Papini, Odile"https://www.zbmath.org/authors/?q=ai:papini.odile"Rocher, Swan"https://www.zbmath.org/authors/?q=ai:rocher.swan"Tabia, Karim"https://www.zbmath.org/authors/?q=ai:tabia.karimSummary: Generalising the state of the art, an inconsistency-tolerant semantics can be seen as a pair consisting of a modifier operator and an inference strategy. In this paper we deepen the analysis of this general setting and focus on two aspects. First, we investigate the rationality properties of such semantics for existential rule knowledge bases. Second, we unfold the broad landscape of complexity results of inconsistency-tolerant semantics under a specific (yet expressive) subclass of existential rules.
For the entire collection see [Zbl 1350.68015].Toward a Dempster-Shafer theory of conceptshttps://www.zbmath.org/1483.683942022-05-16T20:40:13.078697Z"Frittella, Sabine"https://www.zbmath.org/authors/?q=ai:frittella.sabine"Manoorkar, Krishna"https://www.zbmath.org/authors/?q=ai:manoorkar.krishna"Palmigiano, Alessandra"https://www.zbmath.org/authors/?q=ai:palmigiano.alessandra"Tzimoulis, Apostolos"https://www.zbmath.org/authors/?q=ai:tzimoulis.apostolos"Wijnberg, Nachoem"https://www.zbmath.org/authors/?q=ai:wijnberg.nachoem-mSummary: In this paper, we generalize the basic notions and results of Dempster-Shafer theory from predicates to formal concepts. Results include the representation of conceptual belief functions as inner measures of suitable probability functions, and a Dempster-Shafer rule of combination on belief functions on formal concepts.Revising possibilistic knowledge bases via compatibility degreeshttps://www.zbmath.org/1483.683952022-05-16T20:40:13.078697Z"Jin, Yifan"https://www.zbmath.org/authors/?q=ai:jin.yifan"Wang, Kewen"https://www.zbmath.org/authors/?q=ai:wang.kewen"Wang, Zhe"https://www.zbmath.org/authors/?q=ai:wang.zhe"Zhuang, Zhiqiang"https://www.zbmath.org/authors/?q=ai:zhuang.zhiqiangSummary: Possibilistic logic is a weighted logic for dealing with incomplete and uncertain information by assigning weights to propositional formulas. A possibilistic knowledge base (KB) is a finite set of such formulas. The problem of revising a possibilistic KB by possibilistic formula is not new. However, existing approaches are limited in two ways. Firstly, they suffer from the so-called drowning effect. Secondly, they handle certain and uncertain formulas separately and most only handle certain inputs. In this paper, we propose a unified approach that caters for revision by both certain and uncertain inputs and relieves the drowning effect. 
The approach is based on a refined inconsistency degree function called compatibility degree which provides a unifying framework (called cd-revision) for defining specific revision operators for possibilistic KBs. Our definition leads to an algorithm for computing the result of the proposed revision. The revision operators defined in cd-revision possess some desirable properties including those from classic belief revision and some others that are specific to possibilistic revision. We also show that several major revision operators for possibilistic, stratified and prioritised KBs can be embedded in cd-revision.
For the entire collection see [Zbl 1350.68015].A method of multiple ontology mapping based on reference ontologyhttps://www.zbmath.org/1483.683962022-05-16T20:40:13.078697Z"Zhang, Ling Yu"https://www.zbmath.org/authors/?q=ai:zhang.lingyu"Jiang, Ting Ci"https://www.zbmath.org/authors/?q=ai:jiang.ting-ci"Chen, Shu Xin"https://www.zbmath.org/authors/?q=ai:chen.shuxin(no abstract)Group recommendations: axioms, impossibilities, and random walkshttps://www.zbmath.org/1483.683972022-05-16T20:40:13.078697Z"Lev, Omer"https://www.zbmath.org/authors/?q=ai:lev.omer"Tennenholtz, Moshe"https://www.zbmath.org/authors/?q=ai:tennenholtz.mosheSummary: We introduce an axiomatic approach to group recommendations, in line with previous work on the axiomatic treatment of trust-based recommendation systems, ranking systems, and other foundational work on the axiomatic approach to internet mechanisms in social choice settings. In group recommendations we wish to recommend to a group of agents, consisting of both opinionated and undecided members, a joint choice that would be acceptable to them. Such a system has many applications, such as choosing a movie or a restaurant to go to with a group of friends, recommending games for online game players, and other communal activities.
Our method utilizes a given social graph to extract information on the undecided, relying on the agents influencing them. We first show that a set of fairly natural desired requirements (a.k.a. axioms) leads to an impossibility, rendering their mutual satisfaction unattainable. However, we also show a modified set of axioms that fully axiomatizes a group variant of the random-walk recommendation system, expanding a previous result from the individual recommendation case.
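The random-walk mechanism behind such group recommendations can be illustrated with a minimal sketch (our own toy construction, not the authors' system; all names are hypothetical): undecided agents repeatedly adopt the average opinion of the agents influencing them, which converges to the fixed point of a random walk absorbed at the opinionated members.

```python
# Illustrative sketch (not the authors' code): opinion propagation on a
# social graph. Undecided agents converge to a weighted average of their
# influencers' opinions -- the absorption probabilities of a random walk
# stopped at the opinionated agents.

def propagate(opinions, neighbours, iterations=200):
    """opinions: dict agent -> score in [0, 1] for opinionated agents,
    None for undecided ones.  neighbours: dict agent -> list of agents."""
    scores = {a: (v if v is not None else 0.5) for a, v in opinions.items()}
    for _ in range(iterations):
        for a, v in opinions.items():
            if v is None and neighbours.get(a):  # only undecided agents move
                scores[a] = sum(scores[b] for b in neighbours[a]) / len(neighbours[a])
    return scores

# Toy group: agents 0 and 1 are opinionated, agent 2 is undecided and
# is influenced by both of them.
group = {0: 1.0, 1: 0.0, 2: None}
graph = {2: [0, 1]}
result = propagate(group, graph)
# agent 2 settles at 0.5, the average of its two influencers
```

With more hops between undecided agents, the same iteration spreads opinions transitively through the graph, which is the behaviour the axioms in the abstract constrain.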
For the entire collection see [Zbl 1446.68014].Providing reliability in recommender systems through Bernoulli matrix factorizationhttps://www.zbmath.org/1483.683982022-05-16T20:40:13.078697Z"Ortega, Fernando"https://www.zbmath.org/authors/?q=ai:ortega.fernando"Lara-Cabrera, Raúl"https://www.zbmath.org/authors/?q=ai:lara-cabrera.raul"González-Prieto, Ángel"https://www.zbmath.org/authors/?q=ai:gonzalez-prieto.angel"Bobadilla, Jesús"https://www.zbmath.org/authors/?q=ai:bobadilla.jesusSummary: Beyond accuracy, quality measures are gaining importance in modern recommender systems, with reliability being one of the most important indicators in the context of collaborative filtering. This paper proposes Bernoulli Matrix Factorization (BeMF), a matrix factorization model that provides both prediction values and reliability values. BeMF is a very innovative approach from several perspectives: a) it acts on model-based collaborative filtering rather than on memory-based filtering, b) unlike existing solutions, it does not use external methods or extended architectures to provide reliability, c) it is based on a classification-based model instead of traditional regression-based models, and d) the matrix factorization formalism is supported by the Bernoulli distribution to exploit the binary nature of the designed classification model. The experimental results show that the more reliable a prediction is, the less liable it is to be wrong: recommendation quality improves after the most reliable predictions are selected.
State-of-the-art quality measures for reliability have been tested, which shows that BeMF outperforms previous baseline methods and models.Sequences of refinements of rough sets: logical and algebraic aspectshttps://www.zbmath.org/1483.683992022-05-16T20:40:13.078697Z"Boffa, Stefania"https://www.zbmath.org/authors/?q=ai:boffa.stefania"Gerla, Brunella"https://www.zbmath.org/authors/?q=ai:gerla.brunellaSummary: In this thesis, a generalization of the classical \textit{Rough set theory}
[\textit{Z. Pawlak}, Int. J. Comput. Inform. Sci. 11, 341--356 (1982; Zbl 0501.68053)]
is developed considering the so-called \textit{sequences of orthopairs} that
we define in
[the authors, ``Kleene algebras as sequences of orthopairs'', in: Proceedings of the conference of the European Society for Fuzzy Logic and Technology, EUSFLAT'17, and the international workshop on intuitionistic fuzzy sets and generalized nets, IWIFSGN'17. Cham: Springer. (2017; \url{doi:10.1007/978-3-319-66830-7_22})]
as special sequences of rough sets.
Mainly, our aim is to introduce some \textit{operations between sequences of orthopairs}, and to discover how to generate them starting from the operations concerning standard rough sets (defined in
[\textit{D. Ciucci}, Fundam. Inform. 108, No. 3--4, 287--304 (2011; Zbl 1242.68309)]).
Also, we prove several \textit{representation theorems}, which represent the class of \textit{finite centered Kleene algebras with the interpolation property}
[\textit{R. Cignoli}, Algebra Univers. 23, 262--292 (1986; Zbl 0621.06009)],
and some classes of \textit{finite residuated lattices} (more precisely, we consider \textit{Nelson algebras}
[\textit{H. Rasiowa}, Fundam. Math. 46, 61--80 (1958; Zbl 0087.00905)],
\textit{Nelson lattices}
[\textit{M. Busaniche} and \textit{R. Cignoli}, J. Log. Comput. 20, No. 4, 761--793 (2010; Zbl 1205.03040)],
\textit{IUML-algebras}
[\textit{G. Metcalfe} and \textit{F. Montagna}, J. Symb. Log. 72, No. 3, 834--864 (2007; Zbl 1139.03017)]
and \textit{Kleene lattices with implication}
[\textit{J. L. Castiglioni} et al., Algebra Univers. 77, No. 4, 375--393 (2017; Zbl 1400.06004)])
as sequences of orthopairs.
Moreover, as an application, we show that a sequence of orthopairs can be used to represent \textit{an examiner's opinion on a number of candidates applying for a job}, and we show that opinions of two or more examiners can be combined using operations between sequences of orthopairs in order to get a final decision on each candidate.
Finally, we introduce the novel \textit{modal logic} \(SO_n\), whose semantics is based on sequences of orthopairs, and we employ it to describe the knowledge of an agent that increases over time, as new information is provided. Modal logic \(SO_n\) is characterized by the sequences \((\square_1, \ldots , \square_n)\) and \((\bigcirc_1, \ldots , \bigcirc_n)\) of \(n\) modal operators corresponding to a sequence \((t_1, \ldots , t_n)\) of consecutive times. Furthermore, the operator \(\square_i\) of \((\square_1, \ldots , \square_n)\) represents the knowledge of an agent at time \(t_i\), and it coincides with the \textit{necessity modal operator} of S5 logic
[\textit{B. F. Chellas}, Modal logic. An introduction. Cambridge etc.: Cambridge University Press (1980; Zbl 0431.03009)].
On the other hand, the main innovative aspect of modal logic \(SO_n\) is the presence of the sequence \((\bigcirc_1, \ldots , \bigcirc_n)\), since \(\bigcirc_i\) establishes whether an agent is \textit{interested in knowing} a given fact at time \(t_i\).
For the entire collection see [Zbl 1475.68025].Incremental attribute reduction based on a simplified discernibility matrixhttps://www.zbmath.org/1483.684002022-05-16T20:40:13.078697Z"Ge, Hao"https://www.zbmath.org/authors/?q=ai:ge.hao"Li, Long Shu"https://www.zbmath.org/authors/?q=ai:li.longshu"Yang, Chuan Jian"https://www.zbmath.org/authors/?q=ai:yang.chuanjian(no abstract)Discernibility matrices based on credibility and attribute reduction methodhttps://www.zbmath.org/1483.684012022-05-16T20:40:13.078697Z"Ge, Hao"https://www.zbmath.org/authors/?q=ai:ge.hao"Li, Long Shu"https://www.zbmath.org/authors/?q=ai:li.longshu"Yang, Chuan Jian"https://www.zbmath.org/authors/?q=ai:yang.chuanjian(no abstract)Updating algorithms of core attribute based on modifying datahttps://www.zbmath.org/1483.684022022-05-16T20:40:13.078697Z"Ge, Hao"https://www.zbmath.org/authors/?q=ai:ge.hao"Li, Long Shu"https://www.zbmath.org/authors/?q=ai:li.longshu"Yang, Chuan Jian"https://www.zbmath.org/authors/?q=ai:yang.chuanjian"Wang, Bo"https://www.zbmath.org/authors/?q=ai:wang.bo.1|wang.bo.2(no abstract)The stratic defuzzifier for discretised general type-2 fuzzy setshttps://www.zbmath.org/1483.684032022-05-16T20:40:13.078697Z"Greenfield, Sarah"https://www.zbmath.org/authors/?q=ai:greenfield.sarah"Chiclana, Francisco"https://www.zbmath.org/authors/?q=ai:chiclana.franciscoSummary: Stratification is a feature of the type-reduced set of the general type-2 fuzzy set, from which a new technique for general type-2 defuzzification, Stratic Defuzzification, may be derived. Existing defuzzification strategies are summarised. The stratified structure is described, after which the Stratic Defuzzifier is presented and contrasted experimentally for accuracy and efficiency with both the Exhaustive Method of Defuzzification (to benchmark accuracy) and the \(\alpha \)-Planes/Karnik-Mendel Iterative Procedure strategy, employing 5, 11, 21, 51 and 101 \(\alpha\)-planes. 
The Stratic Defuzzifier is shown to be much faster than the Exhaustive Defuzzifier. In fact, the Stratic Defuzzifier and the \(\alpha\)-Planes/Karnik-Mendel Iterative Procedure Method are comparable in speed; the speed of execution correlates with the number of planes participating in the defuzzification process. The accuracy of the Stratic Defuzzifier is shown to be excellent. It is demonstrated to be more accurate than the \(\alpha\)-Planes/Karnik-Mendel Iterative Procedure Method in four of six test cases, regardless of the number of \(\alpha\)-planes employed. In one test case, it is less accurate than the \(\alpha\)-Planes/Karnik-Mendel Iterative Procedure Method, regardless of the number of \(\alpha\)-planes employed. In the remaining test case, the \(\alpha\)-Planes/Karnik-Mendel Iterative Procedure Method with 11 \(\alpha\)-planes gives the most accurate result, with the Stratic Defuzzifier coming second.On the approximation of a membership function by empirical quantile functionshttps://www.zbmath.org/1483.684042022-05-16T20:40:13.078697Z"Guerra, Maria Letizia"https://www.zbmath.org/authors/?q=ai:guerra.maria-letizia"Sorini, Laerte"https://www.zbmath.org/authors/?q=ai:sorini.laerte"Stefanini, Luciano"https://www.zbmath.org/authors/?q=ai:stefanini.lucianoSummary: The Average Cumulative representation of fuzzy intervals is connected with possibility theory in the sense that the possibility and necessity functions are substituted by a pair of non-decreasing functions defined as the positive and negative variations in the Jordan decomposition of a membership function.
In this paper we motivate the crucial role of the ACF (average cumulative function) in determining the membership function from experimental data; some examples and simulations demonstrate the robustness of the proposed construction.
\textit{Z. Pawlak} [Int. J. Comput. Inform. Sci. 11, 341--356 (1982; Zbl 0501.68053)]
on rough set theory has attracted the attention of researchers from various disciplines. Algebraists introduced some new algebraic structures and represented some old existing algebraic structures in terms of algebras formed by rough sets. In Logic, the rough set theory serves the models of several logics. This paper is an amalgamation of algebras and logics of rough set theory. We prove a structural theorem for Kleene algebras, showing that an element of a Kleene algebra can be looked upon as a rough set in some appropriate approximation space. The proposed propositional logic \(\mathcal{L}_K\) of Kleene algebras is sound and complete with respect to a 3-valued and a rough set semantics.
This article also investigates some negation operators in classical rough set theory, using Dunn's approach. We investigate the semantics of the Stone negation in perp frames, that of dual Stone negation in exhaustive frames, and that of Stone and dual Stone negations with the regularity property in \(K_-\) frames. The study leads to new semantics for the logics corresponding to the classes of Stone algebras, dual Stone algebras, and regular double Stone algebras. Since perp semantics provides a Kripke-type semantics for logics with negations, exploiting this feature we obtain duality results for several classes of algebras and corresponding frames.
In another part of this article, we propose a granule-based generalization of rough set theory. We obtain representations of distributive lattices (with operators) and Heyting algebras (with operators). Moreover, various negations arise from this generalized rough set theory and occupy new positions in Dunn's Kite of negations.
For the entire collection see [Zbl 1475.68025].Multiattribute decision making based on interval-valued intuitionistic fuzzy values, score function of connection numbers, and the set pair analysis theoryhttps://www.zbmath.org/1483.684062022-05-16T20:40:13.078697Z"Kumar, Kamal"https://www.zbmath.org/authors/?q=ai:kumar.kamal"Chen, Shyi-Ming"https://www.zbmath.org/authors/?q=ai:chen.shyimingSummary: This paper proposes a new multiattribute decision making (MADM) method based on the proposed score function of connection numbers (CNs) and the set pair analysis (SPA) theory in the interval-valued intuitionistic fuzzy (IVIF) context. Firstly, we develop a score function for ranking CNs. The various notable characteristics of the proposed score function of CNs are also presented. Then, we propose a new MADM method based on interval-valued intuitionistic fuzzy values (IVIFVs), the proposed score function of CNs and the SPA theory, where we convert IVIFVs into CNs and the optimal weights of attributes are calculated from the IVIF weights of attributes. Finally, the proposed MADM method is applied for MADM in the IVIF context, where the preference orders (POs) of the alternatives obtained by the proposed MADM method are compared with the ones obtained by the existing MADM methods. The proposed MADM method can overcome the drawbacks of the existing MADM methods.Similarity-based rough sets and its applications in data mininghttps://www.zbmath.org/1483.684072022-05-16T20:40:13.078697Z"Nagy, Dávid"https://www.zbmath.org/authors/?q=ai:nagy.davidSummary: Pawlakian spaces rely on an equivalence relation which represents indiscernibility. As a generalization of these spaces, some approximation spaces have appeared that are not based on an equivalence relation but on a tolerance relation that represents similarity. These spaces preserve the property of the Pawlakian space that the union of the base sets covers the universe.
However, they give up the requirement that the base sets be pairwise disjoint. The base sets are generated by collecting, for each object, the objects similar to it; that is, only similarity to a single given object is considered. In the worst case, the number of base sets can equal the number of objects in the universe. This significantly increases the computational cost of the set approximation process and limits its efficient use in large databases. To overcome this problem, a possible solution is presented in this dissertation: a space called similarity-based rough sets, in which the system of base sets is generated by correlation clustering. Real similarity is thus taken into consideration, not merely similarity to a distinguished object. The space generated this way, on the one hand, represents the interpreted similarity properly and, on the other hand, reduces the number of base sets to a manageable size. This work deals with the properties and applicability of this space, presenting the advantages that can be gained from correlation clustering.
For the entire collection see [Zbl 1475.68025].A tool for probabilistic reasoning based on logic programming and first-order theories under stable model semanticshttps://www.zbmath.org/1483.684082022-05-16T20:40:13.078697Z"Nickles, Matthias"https://www.zbmath.org/authors/?q=ai:nickles.matthiasSummary: This system description paper describes the software framework PrASP (``probabilistic answer set programming''). PrASP is both an uncertainty reasoning and machine learning tool and a probabilistic logic programming language based on answer set programming (ASP). Besides serving as a research software platform for non-monotonic (inductive) probabilistic logic programming, our framework mainly targets applications in the area of uncertainty stream reasoning. PrASP programs can consist of ASP (AnsProlog) as well as first-order logic formulas (with stable model semantics), annotated with conditional or unconditional probabilities or probability intervals. A number of alternative inference algorithms make it possible to attune the system to different task characteristics (e.g., whether or not independence assumptions can be made).
For the entire collection see [Zbl 1350.68015].Analogical inequalitieshttps://www.zbmath.org/1483.684092022-05-16T20:40:13.078697Z"Prade, Henri"https://www.zbmath.org/authors/?q=ai:prade.henri-m"Richard, Gilles"https://www.zbmath.org/authors/?q=ai:richard.gillesSummary: Analogical proportions, i.e., statements of the form \(a\) is to \(b\) as \(c\) is to \(d\), state that the way \(a\) and \(b\) possibly differ is the same as the way \(c\) and \(d\) differ. Thus, such a proportion expresses an equality (between differences). However, expressing inequalities may also be of interest for stating, for instance, that the difference between \(a\) and \(b\) is smaller than the one between \(c\) and \(d\). The logical modeling of analogical proportions, both in the Boolean case and in the multiple-valued case, has been developed in recent years. This short paper provides a preliminary investigation of the logical modeling of so-called ``analogical inequalities'', which are introduced here, in relation with analogical proportions.
For the entire collection see [Zbl 1367.68004].Boolean analogical proportions -- axiomatics and algorithmic complexity issueshttps://www.zbmath.org/1483.684102022-05-16T20:40:13.078697Z"Prade, Henri"https://www.zbmath.org/authors/?q=ai:prade.henri-m"Richard, Gilles"https://www.zbmath.org/authors/?q=ai:richard.gillesSummary: Analogical proportions, i.e., statements of the form \(a\) is to \(b\) as \(c\) is to \(d\), are supposed to obey three axioms expressing reflexivity, symmetry, and stability under central permutation. These axioms are not enough to determine a single Boolean model unless a minimality condition is added. After an algebraic discussion of this minimal model and of related expressions, another justification of this model is given in terms of Kolmogorov complexity. It is shown that the six Boolean patterns that make an analogical proportion true have a minimal complexity with respect to an expression reflecting the intended meaning of the proportion.
For the entire collection see [Zbl 1367.68004].Three-way multi-attribute decision making under hesitant fuzzy environmentshttps://www.zbmath.org/1483.684112022-05-16T20:40:13.078697Z"Wang, Jiajia"https://www.zbmath.org/authors/?q=ai:wang.jiajia"Ma, Xueling"https://www.zbmath.org/authors/?q=ai:ma.xueling"Xu, Zeshui"https://www.zbmath.org/authors/?q=ai:xu.zeshui"Zhan, Jianming"https://www.zbmath.org/authors/?q=ai:zhan.jianmingSummary: In decision making processes, an expert with hesitant attitudes may experience difficulties when evaluating alternatives via a single assessment value. By allowing the membership degree of an element to a set to be represented by several possible values, hesitant fuzzy sets (HFSs) are well suited to this situation. Thus, it is meaningful to put forward a kind of multi-attribute decision making (MADM) method in the hesitant fuzzy (HF) environment. In addition, three-way decision (3WD) is a decision making method that introduces the idea of non-commitment, and it can effectively reduce decision risks. In this paper, the 3WD method-based MADM with HF information is proposed. According to the information table, the membership function of an objective HFS is given. The effectiveness of the proposed method is verified by solving an infectious disease diagnosis problem. Finally, we give a comparative analysis between the proposed method and the existing HF MADM methods.
Further, three experimental results show that the proposed method is reasonable and effective.A ranking model of Z-mixture-numbers based on the ideal degree and its application in multi-attribute decision makinghttps://www.zbmath.org/1483.684122022-05-16T20:40:13.078697Z"Xian, Sidong"https://www.zbmath.org/authors/?q=ai:xian.sidong"Chai, Jiahui"https://www.zbmath.org/authors/?q=ai:chai.jiahui"Li, Tangjin"https://www.zbmath.org/authors/?q=ai:li.tangjin"Huang, Jie"https://www.zbmath.org/authors/?q=ai:huang.jie.2|huang.jie.1Summary: The Z-number has a great advantage in describing uncertain information. Since Zadeh proposed the Z-number, scholars have combined the Z-number with multi-attribute decision making (MADM). However, the problem of having both continuous and discrete attributes in practical MADM is rarely addressed by existing methods. To solve this problem, first, we propose the concept of Z-mixture-numbers and propose a new ranking model based on the idea of the ideal degree. Next, after analyzing the correlation coefficient between attributes, we propose the Z-multi-attribute decision making weighting method based on the correlation coefficient. Moreover, the Z mixture induced ordered weighted averaging (ZMIOWA) operator and the Z mixture combined weighted averaging aggregation operator (ZMCWAA) are put forward to solve the MADM problems in which continuous and discrete attributes exist simultaneously. Finally, we propose a MADM method with Z-mixture-numbers and take the example of a car-sharing venture capital problem to illustrate its feasibility.
This resolves the previously open problem of MADM with Z-numbers over mixed continuous and discrete attributes.Three-way decision models based on multigranulation support intuitionistic fuzzy rough setshttps://www.zbmath.org/1483.684132022-05-16T20:40:13.078697Z"Xue, Zhan'ao"https://www.zbmath.org/authors/?q=ai:xue.zhanao"Zhao, Liping"https://www.zbmath.org/authors/?q=ai:zhao.liping"Sun, Lin"https://www.zbmath.org/authors/?q=ai:sun.lin"Zhang, Min"https://www.zbmath.org/authors/?q=ai:zhang.min.7|zhang.min.5|zhang.min.1|zhang.min.4|zhang.min|zhang.min.6|zhang.min.2|zhang.min.3"Xue, Tianyu"https://www.zbmath.org/authors/?q=ai:xue.tianyuSummary: To capture the influence of various uncertain factors during delayed decision-making, support intuitionistic fuzzy sets (SIFSs) are introduced for three-way decisions (TWDs) to study this topic from the perspective of multigranulation. First, the concepts of support intuitionistic fuzzy rough sets are defined, and their related properties are discussed. Then, we combine support intuitionistic fuzzy rough sets with multigranulation rough sets (MRSs), present optimistic/pessimistic multigranulation support intuitionistic fuzzy rough set models, and discuss their corresponding properties. Second, a parameter \(\alpha\) is introduced to constrain the disjunction and conjunction of multiple support intuitionistic fuzzy relations, and variable optimistic and pessimistic multigranulation support intuitionistic fuzzy rough set models are constructed. Third, we define the similarity measure, positive ideal solution, negative ideal solution, and conditional probability based on multigranulation support intuitionistic fuzzy rough sets. Four kinds of TWD models based on four proposed multigranulation support intuitionistic fuzzy rough set models are established.
Finally, decision rules can be obtained from a new score function and accuracy function, and the decision rule extraction algorithm based on multigranulation support intuitionistic fuzzy rough sets is designed. Experimental results on a series of examples demonstrate the effectiveness of our proposed TWD models.Attribute reduction based on logical operation of Boolean discernibility matriceshttps://www.zbmath.org/1483.684142022-05-16T20:40:13.078697Z"Yang, Chuan Jian"https://www.zbmath.org/authors/?q=ai:yang.chuanjian"Ge, Hao"https://www.zbmath.org/authors/?q=ai:ge.hao"Li, Long Shu"https://www.zbmath.org/authors/?q=ai:li.longshu(no abstract)Incremental fuzzy probability decision-theoretic approaches to dynamic three-way approximationshttps://www.zbmath.org/1483.684152022-05-16T20:40:13.078697Z"Yang, Xin"https://www.zbmath.org/authors/?q=ai:yang.xin"Liu, Dun"https://www.zbmath.org/authors/?q=ai:liu.dun"Yang, Xibei"https://www.zbmath.org/authors/?q=ai:yang.xibei"Liu, Keyu"https://www.zbmath.org/authors/?q=ai:liu.keyu"Li, Tianrui"https://www.zbmath.org/authors/?q=ai:li.tianruiSummary: As a special model of three-way decision, three-way approximations in the fuzzy probability space can be interpreted, represented, and implemented as dividing the universe into three pair-wise disjoint regions, i.e., the positive, negative and boundary regions, which are transformed from the fuzzy membership grades with respect to the fuzzy concept. To consider the temporality and uncertainty of data simultaneously, this paper focuses on the integration of dynamics and fuzziness in the context of three-way approximations. We analyze and investigate three types of fuzzy conditional probability functions based on the fuzzy \(T\)-norm operators. Besides, we introduce the matrix-based fuzzy probability decision-theoretic models to dynamic three-way approximations based on the principle of least cost. 
Subsequently, to reduce the computational cost, we design incremental algorithms based on matrix-updating strategies for when the attributes evolve over time. Finally, a series of comparative experiments is reported to demonstrate and verify the performance of the proposed models.
The experimental results show that the proposed method is capable of describing a variety of dynamic features and exhibiting high accuracy for the industrial data.A three-value abstraction technique for the verification of epistemic properties in multi-agent systemshttps://www.zbmath.org/1483.684172022-05-16T20:40:13.078697Z"Belardinelli, Francesco"https://www.zbmath.org/authors/?q=ai:belardinelli.francesco"Lomuscio, Alessio"https://www.zbmath.org/authors/?q=ai:lomuscio.alessioSummary: We put forward an abstraction technique, based on a three-value semantics, for the verification of epistemic properties of agents participating in a multi-agent system. First, we introduce a three-value interpretation of epistemic logic, based on a notion of order defined on the information content of the local states of each agent. Then, we use the three-value semantics to introduce an abstraction technique to verify epistemic properties of agents in infinite-state multi-agent systems.
For the entire collection see [Zbl 1350.68015].Delay and cooperation in nonstochastic banditshttps://www.zbmath.org/1483.684182022-05-16T20:40:13.078697Z"Cesa-Bianchi, Nicolò"https://www.zbmath.org/authors/?q=ai:cesa-bianchi.nicolo"Gentile, Claudio"https://www.zbmath.org/authors/?q=ai:gentile.claudio"Mansour, Yishay"https://www.zbmath.org/authors/?q=ai:mansour.yishaySummary: We study networks of communicating learning agents that cooperate to solve a common nonstochastic bandit problem. Agents use an underlying communication network to get messages about actions selected by other agents, and drop messages that took more than \(d\) hops to arrive, where \(d\) is a delay parameter. We introduce Exp3-Coop, a cooperative version of the Exp3 algorithm and prove that with \(K\) actions and \(N\) agents the average per-agent regret after \(T\) rounds is at most of order \(\sqrt{\left(d+1 + \tfrac{K}{N}\alpha_{\le d}\right)(T\ln K)}\), where \(\alpha_{\le d}\) is the independence number of the \(d\)-th power of the communication graph \(G\). We then show that for any connected graph, for \(d=\sqrt{K}\) the regret bound is \(K^{1/4}\sqrt{T}\), strictly better than the minimax regret \(\sqrt{KT}\) for noncooperating agents. More informed choices of \(d\) lead to bounds which are arbitrarily close to the full information minimax regret \(\sqrt{T\ln K}\) when \(G\) is dense.
When \(G\) has sparse components, we show
that a variant of Exp3-Coop, allowing agents to choose their parameters according to their centrality in \(G\), strictly improves the regret. Finally, as a by-product of our analysis, we provide the first characterization of the minimax regret for bandit learning with delay.Human-agent decision-making: combining theory and practicehttps://www.zbmath.org/1483.684192022-05-16T20:40:13.078697Z"Kraus, Sarit"https://www.zbmath.org/authors/?q=ai:kraus.saritSummary: Extensive work has been conducted both in game theory and logic to model strategic interaction. An important question is whether we can use these theories to design agents for interacting with people. On the one hand, they provide a formal design specification for agent strategies. On the other hand, people do not necessarily adhere to playing in accordance with these strategies, and their behavior is affected by a multitude of social and psychological factors. In this paper we will consider the question of whether strategies implied by theories of strategic behavior can be used by automated agents that interact proficiently with people. We will focus on automated agents that we built that need to interact with people in two negotiation settings: bargaining and deliberation. For bargaining we will study game-theory-based equilibrium agents and for argumentation we will discuss logic-based argumentation theory. We will also consider security games and persuasion games and will discuss the benefits of using equilibrium-based agents.
For the entire collection see [Zbl 1446.68019].An axiomatic approach to routinghttps://www.zbmath.org/1483.684202022-05-16T20:40:13.078697Z"Lev, Omer"https://www.zbmath.org/authors/?q=ai:lev.omer"Tennenholtz, Moshe"https://www.zbmath.org/authors/?q=ai:tennenholtz.moshe"Zohar, Aviv"https://www.zbmath.org/authors/?q=ai:zohar.avivSummary: Information delivery in a network of agents is a key issue for large, complex systems that need to deliver it in a predictable, efficient manner. The delivery of information in such multi-agent systems is typically implemented through routing protocols that determine how information flows through the network. Different routing protocols exist, each with its own benefits, but it is generally unclear which properties can be successfully combined within a given algorithm. We approach this problem from the axiomatic point of view, i.e., we try to establish which properties we would seek in such a system, and examine the different properties which uniquely define common routing algorithms used today.
We examine several desirable properties, such as robustness, which ensures that adding nodes and edges does not change the routing in radical, unpredictable ways; and properties that depend on the operating environment, such as an ``economic model'', where nodes choose their paths based on the cost they are charged to pass information to the next node. We proceed to fully characterize minimal spanning tree, shortest path, and weakest link routing algorithms, showing a tight set of axioms for each.
For the entire collection see [Zbl 1446.68019].Preference at first sighthttps://www.zbmath.org/1483.684212022-05-16T20:40:13.078697Z"Liu, Chanjuan"https://www.zbmath.org/authors/?q=ai:liu.chanjuanSummary: We consider decision-making and game scenarios in which an agent is limited by his/her computational ability to foresee all the available moves towards the future -- that is, we study scenarios with \textit{short sight}. We focus on how short sight affects the logical properties of decision making in multi-agent settings. We start with \textit{single-agent sequential decision making} (SSDM) processes, modeling them by a new structure of '\textit{preference-sight trees}'. Using this model, we first explore the relation between a new natural solution concept of \textit{Sight-Compatible Backward Induction} (SCBI) and the histories produced by classical Backward Induction (BI). In particular, we find necessary and sufficient conditions for the two analyses to be equivalent. Next, we study whether larger sight always contributes to better outcomes. Then we develop a simple logical special-purpose language to formally express some key properties of our preference-sight models. Lastly, we show how short-sight SSDM scenarios call for substantial enrichments of existing fixed-point logics that have been developed for the classical BI solution concept. We also discuss changes in earlier modal logics expressing 'surface reasoning' about best actions in the presence of short sight. Our analysis may point the way to logical and computational analysis of more realistic game models.
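The gap between classical backward induction and sight-limited play can be illustrated with a toy sketch (our own construction, not the paper's preference-sight-tree formalism; in particular, valuing subtrees beyond the horizon at 0 is an assumed placeholder heuristic):

```python
# Toy illustration: a game tree in which a single maximizing agent moves at
# every node. Classical backward induction (BI) evaluates the whole tree;
# the short-sighted agent re-plans at each node, looking only `sight` levels
# ahead and valuing any subtree beyond the horizon at 0 (a crude placeholder).

# A tree node is either a payoff (int) or a list of child nodes.

def bi_value(node):
    """Classical backward induction: value of the subtree rooted at node."""
    if isinstance(node, int):
        return node
    return max(bi_value(child) for child in node)

def sight_value(node, sight):
    """Depth-limited value: subtrees past the horizon count as 0."""
    if isinstance(node, int):
        return node
    if sight == 0:
        return 0
    return max(sight_value(c, sight - 1) for c in node)

def short_sight_play(node, sight):
    """Payoff reached by a sight-limited agent that re-plans at each node."""
    while not isinstance(node, int):
        node = max(node, key=lambda c: sight_value(c, sight - 1))
    return node

tree = [[1, 2], [0, [5, 0]]]   # the right branch hides a payoff of 5 one level deeper
full = bi_value(tree)          # far-sighted value: 5
myopic = short_sight_play(tree, sight=1)  # reaches only 2, missing the hidden 5
```

The divergence between `full` and `myopic` is exactly the kind of discrepancy the paper's Sight-Compatible Backward Induction analysis characterizes; its equivalence conditions describe when no such gap arises.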
For the entire collection see [Zbl 1446.68019].Normalizing flow policies for multi-agent systemshttps://www.zbmath.org/1483.684222022-05-16T20:40:13.078697Z"Ma, Xiaobai"https://www.zbmath.org/authors/?q=ai:ma.xiaobai"Gupta, Jayesh K."https://www.zbmath.org/authors/?q=ai:gupta.jayesh-k"Kochenderfer, Mykel J."https://www.zbmath.org/authors/?q=ai:kochenderfer.mykel-jSummary: Stochastic policy gradient methods using neural representations have had considerable success in single-agent domains with continuous action spaces. These methods typically use networks that output the parameters of a diagonal Gaussian distribution from which the resulting action is sampled. In multi-agent contexts, however, better policies may require complex multimodal action distributions. Based on recent progress in density modeling, we propose an alternative for policy representation in the form of conditional normalizing flows. This approach allows for greater flexibility in action distribution representation beyond mixture models. We demonstrate their advantage over standard methods on a set of tasks including human behavior modeling and reinforcement learning in multi-agent settings.
For the entire collection see [Zbl 1475.68029].Relating knowledge and coordinated action: the knowledge of preconditions principlehttps://www.zbmath.org/1483.684232022-05-16T20:40:13.078697Z"Moses, Yoram"https://www.zbmath.org/authors/?q=ai:moses.yoramSummary: The Knowledge of Preconditions principle (KoP) is proposed as a widely applicable connection between knowledge and action in multi-agent systems. Roughly speaking, it asserts that if some condition \(\varphi\) is a necessary condition for performing a given action \(\alpha\), then \textit{knowing} that \(\varphi\) holds is also a necessary condition for performing \(\alpha\). Since the specifications of tasks often involve necessary conditions for actions, the KoP principle shows that such specifications induce knowledge preconditions for the actions. Distributed protocols or multi-agent plans that satisfy the specifications must ensure that this knowledge be attained, and that it is detected by the agents as a condition for action. The knowledge of preconditions principle is formalised in the runs and systems framework, and is proven to hold in a wide class of settings. Well-known connections between knowledge and coordinated action are extended and shown to derive directly from the KoP principle: a \textit{common knowledge of preconditions} principle is established showing that common knowledge is a necessary condition for performing simultaneous actions, and a \textit{nested knowledge of preconditions} principle is proven, showing that coordinating actions to be performed in linear temporal order requires a corresponding form of nested knowledge.
For the entire collection see [Zbl 1446.68019].Reactive policies with planning for action languageshttps://www.zbmath.org/1483.684242022-05-16T20:40:13.078697Z"Saribatur, Zeynep G."https://www.zbmath.org/authors/?q=ai:saribatur.zeynep-gozen"Eiter, Thomas"https://www.zbmath.org/authors/?q=ai:eiter.thomasSummary: Action languages are an important family of formalisms to represent action domains in a declarative manner and to reason about them. For this reason, the behavior of an agent in an environment may be governed by policies which take such action domain descriptions into account. In this paper, we describe a formal semantics for describing policies that express a reactive behavior for an agent, and connect our framework with the representation power of action languages. In this framework, we mitigate the problem of large state spaces by employing the notion of indistinguishability, and combine components that are efficient for describing reactivity, such as target establishment and (online) planning. Our representation allows one to analyze the flow of executing the given reactive policy, and lays foundations for verifying properties of policies. Additionally, the flexibility of the representation opens a range of possibilities for designing behaviors.
For the entire collection see [Zbl 1350.68015].A numerical framework for elastic surface matching, comparison, and interpolationhttps://www.zbmath.org/1483.684252022-05-16T20:40:13.078697Z"Bauer, Martin"https://www.zbmath.org/authors/?q=ai:bauer.martin"Charon, Nicolas"https://www.zbmath.org/authors/?q=ai:charon.nicolas"Harms, Philipp"https://www.zbmath.org/authors/?q=ai:harms.philipp"Hsieh, Hsi-Wei"https://www.zbmath.org/authors/?q=ai:hsieh.hsi-weiSummary: Surface comparison and matching is a challenging problem in computer vision. While elastic Riemannian metrics provide meaningful shape distances and point correspondences via the geodesic boundary value problem, solving this problem numerically tends to be difficult. Square root normal fields considerably simplify the computation of certain distances between parametrized surfaces. Yet they leave open the issue of finding optimal reparametrizations, which induce corresponding distances between unparametrized surfaces. This issue has attracted considerable effort in recent years and led to the development of several numerical frameworks. In this paper, we take an alternative approach which bypasses the direct estimation of reparametrizations: we relax the geodesic boundary constraint using an auxiliary parametrization-blind varifold fidelity metric. This reformulation has several notable benefits. By avoiding altogether the need for reparametrizations, it provides the flexibility to deal with simplicial meshes of arbitrary topologies and sampling patterns. Moreover, the problem lends itself to a coarse-to-fine multi-resolution implementation, which makes the algorithm scalable to large meshes. Furthermore, this approach extends readily to higher-order feature maps such as square root curvature fields and is also able to include surface textures in the matching problem.
We demonstrate these advantages on several examples, synthetic and real.The isowarp: the template-based visual geometry of isometric surfaceshttps://www.zbmath.org/1483.684262022-05-16T20:40:13.078697Z"Casillas-Perez, David"https://www.zbmath.org/authors/?q=ai:casillas-perez.david"Pizarro, Daniel"https://www.zbmath.org/authors/?q=ai:pizarro.daniel"Fuentes-Jimenez, David"https://www.zbmath.org/authors/?q=ai:fuentes-jimenez.david"Mazo, Manuel"https://www.zbmath.org/authors/?q=ai:mazo.manuel-jun"Bartoli, Adrien"https://www.zbmath.org/authors/?q=ai:bartoli.adrienSummary: Registration maps or \textit{warps} form a key element in Shape-from-Template (SfT). They relate the template with the input image, which contains the projection of the deformed surface. Recently, it was shown that isometric SfT can be solved analytically if the warp and its first-order derivatives are known. In practice, the warp is recovered by interpolating a set of discrete template-to-image point correspondences. This process relies on smoothness priors but ignores the 3D geometry. This may produce errors in the warp and poor reconstructions. In contrast, we propose to create a 3D consistent warp, which technically is a very challenging task, as the 3D shape variables must be eliminated from the isometric SfT equations to find differential constraints for the warp only. Integrating these constraints in warp estimation yields the \textit{isowarp}, a warp 3D consistent with isometric SfT. Experimental results show that incorporating the isowarp in the SfT pipeline allows the analytic solution to outperform non-convex 3D shape refinement methods and the recent DNN-based SfT methods. The isowarp can be properly initialized with convex methods and its hyperparameters can be automatically obtained with cross-validation. The isowarp is resistant to 3D ambiguities and less computationally expensive than existing 3D shape refinement methods. 
The isowarp is thus a theoretical and practical breakthrough in SfT.Recursive context routing for object detectionhttps://www.zbmath.org/1483.684272022-05-16T20:40:13.078697Z"Chen, Zhe"https://www.zbmath.org/authors/?q=ai:chen.zhe"Zhang, Jing"https://www.zbmath.org/authors/?q=ai:zhang.jing.9|zhang.jing.12|zhang.jing.5|zhang.jing.1|zhang.jing.2|zhang.jing.11|zhang.jing.7|zhang.jing.3|zhang.jing.6|zhang.jing|zhang.jing.8|zhang.jing.10"Tao, Dacheng"https://www.zbmath.org/authors/?q=ai:tao.dachengSummary: Recent studies have confirmed that modeling contexts is important for object detection. However, current context modeling approaches still have limited expressive capacity and dynamics to encode contextual relationships and model contexts, which undermines their effectiveness. In this paper, we instead seek to recast the current context modeling framework and perform more dynamic context modeling for object detection. In particular, we devise a novel Recursive Context Routing (ReCoR) mechanism to encode contextual relationships and model contexts more effectively. The ReCoR progressively models more contexts through a recursive structure, providing a more feasible and more comprehensive method to utilize complicated contexts and contextual relationships. For each recursive stage, we further decompose the modeling of contexts and contextual relationships into a spatial modeling process and a channel-wise modeling process, avoiding the need to exhaustively model all potential pair-wise contextual relationships in a single pass while retaining more dynamics. The spatial modeling process focuses on spatial contexts and gradually involves more spatial contexts according to the recursive architecture. In the channel-wise modeling process, we introduce a context routing algorithm to improve the efficacy of modeling channel-wise contextual relationships dynamically. We perform a comprehensive evaluation of the proposed ReCoR on the popular MS COCO dataset and PASCAL VOC dataset.
The effectiveness of the ReCoR is validated on both datasets by the consistent performance gains obtained when applying our method to different baseline object detectors. For example, on the MS COCO dataset, our approach delivers around 10\% relative improvement for a Mask RCNN detector on the bounding box task and 7\% relative improvement on the instance segmentation task, surpassing existing context modeling approaches by a large margin. State-of-the-art detection performance can also be achieved by applying the ReCoR on the Cascade Mask RCNN detector, illustrating the great benefits of our method for improving context modeling and object detection.Semantically tied paired cycle consistency for any-shot sketch-based image retrievalhttps://www.zbmath.org/1483.684282022-05-16T20:40:13.078697Z"Dutta, Anjan"https://www.zbmath.org/authors/?q=ai:dutta.anjan"Akata, Zeynep"https://www.zbmath.org/authors/?q=ai:akata.zeynepSummary: Low-shot sketch-based image retrieval is an emerging task in computer vision, allowing the retrieval of natural images relevant to hand-drawn sketch queries that are rarely seen during the training phase. Related prior works either require aligned sketch-image pairs that are costly to obtain or an inefficient memory fusion layer for mapping the visual information to a semantic space. In this paper, we address any-shot, \textit{i.e.} zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks, where we introduce the few-shot setting for SBIR. For solving these tasks, we propose a semantically aligned paired cycle-consistent generative adversarial network (SEM-PCYC) for any-shot SBIR, where each branch of the generative adversarial network maps the visual information from sketch and image to a common semantic space via adversarial training. Each of these branches maintains cycle consistency that only requires supervision at the category level, and avoids the need for aligned sketch-image pairs.
A classification criterion on the generators' outputs ensures that the visual-to-semantic space mapping is class-specific. Furthermore, we propose to combine textual and hierarchical side information via an auto-encoder that selects discriminating side information within the same end-to-end model. Our results demonstrate a significant boost in any-shot SBIR performance over the state-of-the-art on the extended version of the challenging Sketchy, TU-Berlin and QuickDraw datasets.Exploiting contextual knowledge for hybrid classification of visual objectshttps://www.zbmath.org/1483.684292022-05-16T20:40:13.078697Z"Eiter, Thomas"https://www.zbmath.org/authors/?q=ai:eiter.thomas"Kaminski, Tobias"https://www.zbmath.org/authors/?q=ai:kaminski.tobiasSummary: We consider the problem of classifying visual objects in a scene by exploiting the semantic context. For this task, we define hybrid classifiers (HC) that combine local classifiers with context constraints, and can be applied to collective classification problems (CCPs) in general. Context constraints are represented by weighted ASP constraints using object relations. To integrate probabilistic information provided by the classifier and the context, we embed our encoding in the formalism \(\mathit{LP}^\mathit{MLN}\), and show that an optimal labeling can be efficiently obtained from the corresponding \(\mathit{LP}^\mathit{MLN}\) program by employing an ordinary ASP solver. Moreover, we describe a methodology for constructing an HC for a CCP, and present experimental results of applying an HC for object classification in indoor and outdoor scenes, which exhibit significant improvements in terms of accuracy compared to using only a local classifier.
For the entire collection see [Zbl 1350.68015].Inferring 3D shapes from image collections using adversarial networkshttps://www.zbmath.org/1483.684302022-05-16T20:40:13.078697Z"Gadelha, Matheus"https://www.zbmath.org/authors/?q=ai:gadelha.matheus"Rai, Aartika"https://www.zbmath.org/authors/?q=ai:rai.aartika"Maji, Subhransu"https://www.zbmath.org/authors/?q=ai:maji.subhransu"Wang, Rui"https://www.zbmath.org/authors/?q=ai:wang.rui.1|wang.rui|wang.rui.2Summary: We investigate the problem of learning a probabilistic distribution over three-dimensional shapes given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \textit{projective generative adversarial network} (PrGAN) trains a deep generative model of 3D shapes whose projections (or renderings) match the distribution of the provided 2D views. The addition of a \textit{differentiable projection module} allows us to infer the underlying 3D shape distribution without access to any explicit 3D or viewpoint annotation during the learning phase. We show that our approach produces 3D shapes of comparable quality to GANs trained directly on 3D data. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. The key advantage of our model is that it estimates 3D shape, viewpoint, and generates novel views from an input image in a completely unsupervised manner. We further investigate how the generative models can be improved if additional information such as depth, viewpoint or part segmentations is available at training time. To this end, we present new differentiable projection operators that can be used to learn better 3D generative models.
Our experiments show that PrGAN can successfully leverage extra visual cues to create more diverse and accurate shapes.A decomposable Winograd method for N-D convolution acceleration in video analysishttps://www.zbmath.org/1483.684312022-05-16T20:40:13.078697Z"Huang, Di"https://www.zbmath.org/authors/?q=ai:huang.di"Zhang, Rui"https://www.zbmath.org/authors/?q=ai:zhang.rui|zhang.rui.3|zhang.rui.4|zhang.rui.1|zhang.rui.5|zhang.rui.2"Zhang, Xishan"https://www.zbmath.org/authors/?q=ai:zhang.xishan"Wu, Fan"https://www.zbmath.org/authors/?q=ai:wu.fan"Wang, Xianzhuo"https://www.zbmath.org/authors/?q=ai:wang.xianzhuo"Jin, Pengwei"https://www.zbmath.org/authors/?q=ai:jin.pengwei"Liu, Shaoli"https://www.zbmath.org/authors/?q=ai:liu.shaoli"Li, Ling"https://www.zbmath.org/authors/?q=ai:li.ling"Chen, Yunji"https://www.zbmath.org/authors/?q=ai:chen.yunjiSummary: Winograd's minimal filtering algorithm has been widely used in 2-D Convolutional Neural Networks (CNNs) to reduce the number of multiplications for faster processing. However, it is only effective on convolutions with kernel size \(3\) and stride 1, because it suffers from significantly increased FLOPs and numerical accuracy problems for kernel sizes larger than \(3\) and fails on convolutions with stride larger than 1. Worse, the extension to N-D convolution intensifies the numerical accuracy problem. These problems severely obstruct the application of Winograd's minimal filtering algorithm to video analysis. In this paper, we propose a novel Decomposable Winograd Method (DWM) for N-D convolution acceleration, which overcomes the limitations of the original Winograd's minimal filtering algorithm and extends it to more general convolutions. DWM decomposes kernels with large size or stride larger than 1 into several small kernels with stride 1, to which the Winograd algorithm can be further applied, so that DWM reduces the number of multiplications while keeping the numerical accuracy.
It enables the fast exploration of larger kernel sizes, larger stride values, and higher dimensions in CNNs for high performance and accuracy, and even the potential for new CNN designs. Compared with the original Winograd algorithm, the proposed DWM is able to support all kinds of N-D convolutions with a speedup of \(1.44\times -3.38\times\), without affecting the numerical accuracy.Parallel single-pixel imaging: a general method for direct-global separation and 3D shape reconstruction under strong global illuminationhttps://www.zbmath.org/1483.684322022-05-16T20:40:13.078697Z"Jiang, Hongzhi"https://www.zbmath.org/authors/?q=ai:jiang.hongzhi"Li, Yuxi"https://www.zbmath.org/authors/?q=ai:li.yuxi"Zhao, Huijie"https://www.zbmath.org/authors/?q=ai:zhao.huijie"Li, Xudong"https://www.zbmath.org/authors/?q=ai:li.xudong"Xu, Yang"https://www.zbmath.org/authors/?q=ai:xu.yang|xu.yang.2|xu.yang.1|xu.yang.3Summary: We present parallel single-pixel imaging (PSI), a photography technique that captures light transport coefficients and enables the separation of direct and global illumination, to achieve 3D shape reconstruction under strong global illumination. PSI is achieved by extending single-pixel imaging (SI) to modern digital cameras. Each pixel on an imaging sensor is considered an independent unit that can obtain an image using the SI technique. The obtained images characterize the light transport behavior between pixels on the projector and the camera. However, the required number of SI illumination patterns generally becomes unacceptably large in practical situations. We introduce the local region extension (LRE) method to accelerate the data acquisition of PSI. LRE exploits the fact that the region visible to each camera pixel is confined to a local area. Thus, the number of detected unknowns is determined by the local region area, which is extremely beneficial for data acquisition efficiency. PSI possesses several properties and advantages.
For instance, PSI captures the complete light transport coefficients between the projector-camera pair, without making specific assumptions on the measured objects and without requiring special hardware or restrictions on the arrangement of the projector-camera pair. The perfect reconstruction property of LRE can be proven mathematically. The acquisition and reconstruction stages are straightforward and easy to implement in existing projector-camera systems. These properties and advantages make PSI a general and sound theoretical model to decompose direct and global illumination and perform 3D shape reconstruction under global illumination.Compositional convolutional neural networks: a robust and interpretable model for object recognition under occlusionhttps://www.zbmath.org/1483.684332022-05-16T20:40:13.078697Z"Kortylewski, Adam"https://www.zbmath.org/authors/?q=ai:kortylewski.adam"Liu, Qing"https://www.zbmath.org/authors/?q=ai:liu.qing.2|liu.qing|liu.qing.1"Wang, Angtian"https://www.zbmath.org/authors/?q=ai:wang.angtian"Sun, Yihong"https://www.zbmath.org/authors/?q=ai:sun.yihong"Yuille, Alan"https://www.zbmath.org/authors/?q=ai:yuille.alan-lSummary: Computer vision systems in real-world applications need to be robust to partial occlusion while also being explainable. In this work, we show that black-box deep convolutional neural networks (DCNNs) have only limited robustness to partial occlusion. We overcome these limitations by unifying DCNNs with part-based models into Compositional Convolutional Neural Networks (CompositionalNets) -- an interpretable deep architecture with innate robustness to partial occlusion. Specifically, we propose to replace the fully connected classification head of DCNNs with a differentiable compositional model that can be trained end-to-end.
The structure of the compositional model enables CompositionalNets to decompose images into objects and context, as well as to further decompose object representations in terms of individual parts and the objects' pose. The generative nature of our compositional model enables it to localize occluders and to recognize objects based on their non-occluded parts. We conduct extensive experiments in terms of image classification and object detection on images of artificially occluded objects from the PASCAL3D+ and ImageNet datasets, and real images of partially occluded vehicles from the MS-COCO dataset. Our experiments show that CompositionalNets made from several popular DCNN backbones (VGG-16, ResNet50, ResNext) improve by a large margin over their non-compositional counterparts at classifying and detecting partially occluded objects. Furthermore, they can localize occluders accurately despite being trained with class-level supervision only. Finally, we demonstrate that CompositionalNets provide human interpretable predictions as their individual components can be understood as detecting parts and estimating an object's viewpoint.Solving rolling shutter 3D vision problems using analogies with non-rigidityhttps://www.zbmath.org/1483.684342022-05-16T20:40:13.078697Z"Lao, Yizhen"https://www.zbmath.org/authors/?q=ai:lao.yizhen"Ait-Aider, Omar"https://www.zbmath.org/authors/?q=ai:ait-aider.omar"Bartoli, Adrien"https://www.zbmath.org/authors/?q=ai:bartoli.adrienSummary: We propose an original approach to absolute pose and structure-from-motion (SfM) which handles rolling shutter (RS) effects. Unlike most existing methods which either augment global shutter projection with velocity parameters or impose continuous time and motion through pose interpolation, we use local differential constraints. These are established by drawing analogies with non-rigid 3D vision techniques, namely shape-from-template and non-rigid SfM (NRSfM).
The proposed idea is to interpret the images of a rigid surface acquired by a moving RS camera as those of a virtually deformed surface taken by a GS camera. These virtually deformed surfaces are first recovered by relaxing the RS constraint using SfT or NRSfM. Then we upgrade the virtually deformed surface to the actual rigid structure and compute the camera pose and ego-motion by reintroducing the RS constraint. This uses a new 3D-3D registration procedure that minimizes a cost function based on the Euclidean 3D point distance. This is more stable and physically meaningful than the reprojection error or the algebraic distance used in previous work. Experimental results obtained with synthetic and real data show that the proposed methods outperform existing ones in terms of accuracy and stability, even in the known critical configurations.Multilayer joint segmentation using MRF and graph cutshttps://www.zbmath.org/1483.684352022-05-16T20:40:13.078697Z"Lermé, Nicolas"https://www.zbmath.org/authors/?q=ai:lerme.nicolas"Le Hégarat-Mascle, Sylvie"https://www.zbmath.org/authors/?q=ai:le-hegarat-mascle.sylvie"Malgouyres, François"https://www.zbmath.org/authors/?q=ai:malgouyres.francois"Lachaize, Marie"https://www.zbmath.org/authors/?q=ai:lachaize.marieSummary: The problem of jointly segmenting objects, according to a set of labels (of cardinality \(L)\), from a set of images (of cardinality \(K)\) to produce \(K\) individual segmentations plus one joint segmentation, can be cast as a Markov random field model. Coupling terms in the considered energy function enforce the consistency between the individual segmentations and the joint segmentation. However, neither optimality on the minimizer (at least for particular cases), nor the sensitivity of the parameters, nor the robustness of this approach against standard ones has been clearly discussed before. This paper focuses on the case where \(L>1\), \(K>1\) and the segmentation problem is handled using graph cuts. 
Notably, some properties of the considered energy function are demonstrated, such as global optimality when \(L=2\) and \(K>1\), the link with majority voting and the link with naive Bayes segmentation. Experiments on synthetic and real images show superior segmentation performance and better robustness against noisy observations.New robust PCA for outliers and heavy sparse noises' detection via affine transformation, the \(L_{\ast, w}\) and \(L_{2,1}\) norms, and spatial weight matrix in high-dimensional images: from the perspective of signal processinghttps://www.zbmath.org/1483.684362022-05-16T20:40:13.078697Z"Liang, Peidong"https://www.zbmath.org/authors/?q=ai:liang.peidong"Likassa, Habte Tadesse"https://www.zbmath.org/authors/?q=ai:likassa.habte-tadesse"Zhang, Chentao"https://www.zbmath.org/authors/?q=ai:zhang.chentao"Guo, Jielong"https://www.zbmath.org/authors/?q=ai:guo.jielongSummary: In this paper, we propose a novel robust algorithm for image recovery via affine transformations, the weighted nuclear, \( L_{\ast, w}\), and the \(L_{2,1}\) norms. The new method considers the spatial weight matrix to account for the correlated samples in the data, the \(L_{2,1}\) norm to tackle the dilemma of extreme values in the high-dimensional images, and the newly added \(L_{\ast, w}\) norm to alleviate the potential effects of outliers and heavy sparse noises, enabling the new approach to be more resilient to outliers and large variations in high-dimensional images in signal processing. The determination of the parameters is involved, and the affine transformations are cast as a convex optimization problem. To mitigate the computational complexity, the alternating iteratively reweighted direction method of multipliers (ADMM) is utilized to derive a new set of recursive equations to update the optimization variables and the affine transformations iteratively in a round-robin manner.
The new algorithm outperforms state-of-the-art works in terms of accuracy on various public databases.Unsupervised domain adaptation in the wild via disentangling representation learninghttps://www.zbmath.org/1483.684372022-05-16T20:40:13.078697Z"Li, Haoliang"https://www.zbmath.org/authors/?q=ai:li.haoliang"Wan, Renjie"https://www.zbmath.org/authors/?q=ai:wan.renjie"Wang, Shiqi"https://www.zbmath.org/authors/?q=ai:wang.shiqi"Kot, Alex C."https://www.zbmath.org/authors/?q=ai:kot.alex-chichungSummary: Most recently proposed unsupervised domain adaptation algorithms attempt to learn domain invariant features by confusing a domain classifier through adversarial training. In this paper, we argue that this may not be an optimal solution in the real-world setting (a.k.a. in the wild) as the difference in terms of label information between domains has been largely ignored. As labeled instances are not available in the target domain in unsupervised domain adaptation tasks, it is difficult to explicitly capture the label difference between domains. To address this issue, we propose to learn a disentangled latent representation based on implicit autoencoders. In particular, a latent representation is disentangled into a global code and a local code. The global code captures category information via an encoder with a prior, while the local code, which is transferable across domains, captures the ``style'' related information via an implicit decoder.
Experimental results on digit recognition, object recognition and semantic segmentation demonstrate the effectiveness of our proposed method.Deep human-interaction and association by graph-based learning for multiple object tracking in the wildhttps://www.zbmath.org/1483.684382022-05-16T20:40:13.078697Z"Ma, Cong"https://www.zbmath.org/authors/?q=ai:ma.cong"Yang, Fan"https://www.zbmath.org/authors/?q=ai:yang.fan.4|yang.fan.1|yang.fan.2|yang.fan.6"Li, Yuan"https://www.zbmath.org/authors/?q=ai:li.yuan|li.yuan.2|li.yuan.3|li.yuan.1"Jia, Huizhu"https://www.zbmath.org/authors/?q=ai:jia.huizhu"Xie, Xiaodong"https://www.zbmath.org/authors/?q=ai:xie.xiaodong"Gao, Wen"https://www.zbmath.org/authors/?q=ai:gao.wenSummary: Multiple Object Tracking (MOT) in the wild has a wide range of applications in surveillance retrieval and autonomous driving. Tracking-by-Detection has become a mainstream solution in MOT, which is composed of feature extraction and data association. Most of the existing methods focus on extracting targets' individual features and optimizing the association by hand-crafted algorithms. In this paper, we specifically consider the interrelation cues between targets and propose the Human-Interaction Model (HIM) to extract interaction features between the tracked target and its surroundings. The interaction model provides more discriminative features to distinguish objects, especially in crowded (dense) scenes. Meanwhile, we propose an efficient end-to-end model, the Deep Association Network (DAN), to optimize the association with a graph-based learning mechanism. Both HIM and DAN are constructed from three kinds of deep networks: Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Graph Neural Networks (GNNs). The CNNs extract appearance features from bounding box images, and the RNNs encode motion features from the historical positions of trajectories.
The GNNs then extract interaction features and optimize the graph structure to associate objects across frames. In addition, we present a novel end-to-end training strategy for the Deep Association Network and the Human-Interaction Model. Our experimental results demonstrate that the performance of our method reaches the state of the art on the MOT15, MOT16 and DukeMTMCT datasets.Image matching from handcrafted to deep features: a surveyhttps://www.zbmath.org/1483.684392022-05-16T20:40:13.078697Z"Ma, Jiayi"https://www.zbmath.org/authors/?q=ai:ma.jiayi"Jiang, Xingyu"https://www.zbmath.org/authors/?q=ai:jiang.xingyu"Fan, Aoxiang"https://www.zbmath.org/authors/?q=ai:fan.aoxiang"Jiang, Junjun"https://www.zbmath.org/authors/?q=ai:jiang.junjun"Yan, Junchi"https://www.zbmath.org/authors/?q=ai:yan.junchiSummary: As a fundamental and critical task in various visual applications, image matching identifies and then corresponds the same or similar structure/content from two or more images. Over the past decades, a growing number and diversity of methods have been proposed for image matching, particularly with the development of deep learning techniques over the recent years. However, open questions remain about which method would be a suitable choice for specific applications with respect to different scenarios and task requirements, and how to design better image matching methods with superior performance in accuracy, robustness and efficiency. This encourages us to conduct a comprehensive and systematic review and analysis of these classical and latest techniques. Following the feature-based image matching pipeline, we first introduce feature detection, description, and matching techniques from handcrafted methods to trainable ones and provide an analysis of the development of these methods in theory and practice. Secondly, we briefly introduce several typical image matching-based applications for a comprehensive understanding of the significance of image matching.
In addition, we provide a comprehensive and objective comparison of these classical and latest techniques through extensive experiments on representative datasets. Finally, we conclude with the current status of image matching technologies and deliver insightful discussions and prospects for future works. This survey can serve as a reference for (but not limited to) researchers and engineers in image matching and related fields.Necessary and sufficient polynomial constraints on compatible triplets of essential matriceshttps://www.zbmath.org/1483.684402022-05-16T20:40:13.078697Z"Martyushev, E. V."https://www.zbmath.org/authors/?q=ai:martyushev.evgeniy-vSummary: The essential matrix incorporates the relative rotation and translation parameters of two calibrated cameras. The well-known algebraic characterization of essential matrices, i.e. the necessary and sufficient conditions under which an arbitrary matrix (of rank two) becomes essential, consists of a single matrix equation of degree three. Based on this equation, a number of efficient algorithmic solutions to different relative pose estimation problems have been proposed in the last two decades. In three views, a possible way to describe the geometry of three calibrated cameras comes from considering compatible triplets of essential matrices. By compatibility we mean the correspondence of a triplet to a certain configuration of calibrated cameras. The main goal of this paper is to give an algebraic characterization of compatible triplets of essential matrices. Specifically, we propose necessary and sufficient polynomial constraints on a triplet of real rank-two essential matrices that ensure its compatibility. The constraints are given in the form of six cubic matrix equations, and one quartic and one sextic scalar equation. An important advantage of the proposed constraints is their sufficiency even in the case of cameras with collinear centers.
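The single degree-three matrix equation mentioned in this abstract is the classical two-view characterization \(E E^{\mathsf{T}} E - \frac{1}{2}\operatorname{tr}(E E^{\mathsf{T}})\,E = 0\). A minimal NumPy sketch of this two-view condition (our own helper names; the paper's three-view constraints build on this equation but are not reproduced here):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rotation(axis, angle):
    """Rodrigues' formula: rotation about a unit axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def essential_residual(E):
    """Residual of the cubic characterization E E^T E - 0.5 tr(E E^T) E = 0."""
    return E @ E.T @ E - 0.5 * np.trace(E @ E.T) * E

# A true essential matrix E = [t]_x R built from a relative pose (R, t).
R = rotation(np.array([1.0, 2.0, 0.5]), 0.7)
t = np.array([0.3, -1.0, 2.0])
E = skew(t) @ R

print(np.linalg.norm(essential_residual(E)))  # ~0 for a true essential matrix
print(np.linalg.norm(essential_residual(np.diag([1.0, 2.0, 3.0]))))  # nonzero for a generic matrix
```

The residual vanishes exactly when the two nonzero singular values of \(E\) are equal, which is the familiar spectral description of an essential matrix.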
The applications of the constraints may include relative camera pose estimation in three or more views, averaging of essential matrices for incremental structure from motion, multiview camera auto-calibration, etc.Renormalization for initialization of rolling shutter visual-inertial odometryhttps://www.zbmath.org/1483.684412022-05-16T20:40:13.078697Z"Micusik, Branislav"https://www.zbmath.org/authors/?q=ai:micusik.branislav"Evangelidis, Georgios"https://www.zbmath.org/authors/?q=ai:evangelidis.georgios-dSummary: In this paper we deal with the initialization problem of a visual-inertial odometry system with rolling shutter cameras. Initialization is a prerequisite for using inertial signals and fusing them with visual data. We propose a novel statistical solution to the initialization problem on visual and inertial data simultaneously, by casting it into the renormalization scheme of Kanatani. Renormalization is an optimization scheme intended to reduce the inherent statistical bias of common linear systems. We derive and present the necessary steps and methodology specific to the initialization problem. Extensive evaluations against ground truth exhibit superior performance and a gain in accuracy of up to \(20\%\) over the originally proposed least squares solution. The renormalization performs similarly to the optimal maximum likelihood estimate, despite arriving at the solution by different means.
With this paper we add to the set of Computer Vision problems which can be cast into the renormalization scheme.Temporally coherent general dynamic scene reconstructionhttps://www.zbmath.org/1483.684422022-05-16T20:40:13.078697Z"Mustafa, Armin"https://www.zbmath.org/authors/?q=ai:mustafa.armin"Volino, Marco"https://www.zbmath.org/authors/?q=ai:volino.marco"Kim, Hansung"https://www.zbmath.org/authors/?q=ai:kim.hansung"Guillemaut, Jean-Yves"https://www.zbmath.org/authors/?q=ai:guillemaut.jean-yves"Hilton, Adrian"https://www.zbmath.org/authors/?q=ai:hilton.adrianSummary: Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. The contributions of the work are: an automatic method for initial coarse reconstruction to initialize joint estimation; sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing a shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes demonstrates improved accuracy in both multi-view segmentation and dense reconstruction.
This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction, and its use in applications such as free-view rendering and virtual reality.Deformable image registration based on functions of bounded generalized deformationhttps://www.zbmath.org/1483.684432022-05-16T20:40:13.078697Z"Nie, Ziwei"https://www.zbmath.org/authors/?q=ai:nie.ziwei"Li, Chen"https://www.zbmath.org/authors/?q=ai:li.chen.1|li.chen"Liu, Hairong"https://www.zbmath.org/authors/?q=ai:liu.hairong"Yang, Xiaoping"https://www.zbmath.org/authors/?q=ai:yang.xiaopingSummary: Functions of bounded deformation (BD) are widely used in the theory of elastoplasticity to describe the possibly discontinuous displacement fields inside elastoplastic bodies. BD functions have been proved suitable for deformable image registration, the goal of which is to find the displacement field between a moving image and a fixed image. Recently, BD functions have been generalized to symmetric tensor fields of bounded generalized variation. In this paper, we focus on first-order symmetric tensor fields, i.e., vector-valued functions, of bounded generalized variation. We specify these functions as functions of bounded generalized deformation (BGD), since BGD functions are natural generalizations of BD functions. We propose a BGD model for deformable image registration problems by regarding the displacement fields concerned as BGD functions. The BGD model employs not only first-order but also higher-order coupling information among the components of the displacement field. It turns out that the BGD model allows for jump discontinuities of displacements while, in contrast to the BD model, at the same time being able to employ higher-order derivatives of displacements in smooth regions.
As a result, the BGD model tends to capture possible discontinuities of displacements appearing around the edges of the target objects while keeping the smoothness of displacements inside them. This characteristic enables the BGD model to obtain better registration results than the BD model and other variational models. To our knowledge, this is the first use of BGD functions for image registration in the literature. A first-order adaptive primal-dual algorithm is adopted to solve the proposed BGD model. Numerical experiments on 2D and 3D images show both the effectiveness and the advantages of the BGD model.Complete singularity analysis for the perspective-four-point problemhttps://www.zbmath.org/1483.684442022-05-16T20:40:13.078697Z"Pascual-Escudero, Beatriz"https://www.zbmath.org/authors/?q=ai:pascual-escudero.beatriz"Nayak, Abhilash"https://www.zbmath.org/authors/?q=ai:nayak.abhilash"Briot, Sébastien"https://www.zbmath.org/authors/?q=ai:briot.sebastien"Kermorgant, Olivier"https://www.zbmath.org/authors/?q=ai:kermorgant.olivier"Martinet, Philippe"https://www.zbmath.org/authors/?q=ai:martinet.philippe"El Din, Mohab Safey"https://www.zbmath.org/authors/?q=ai:safey-el-din.mohab"Chaumette, François"https://www.zbmath.org/authors/?q=ai:chaumette.francoisSummary: This paper is concerned with pose estimation and visual servoing from four points. We determine the configurations for which the corresponding Jacobian matrix becomes singular, leading to inaccurate and unstable results. Using an adequate representation and algebraic geometry, it is shown that, for any orientation between the camera and the object, there are always two to six singular locations of the camera in the generic case where the points are not coplanar, corresponding to the intersection of four cylinders. The particular case where the four points are coplanar is also characterized.
Furthermore, some realistic example configurations are considered to substantiate the theory and to demonstrate failure cases in pose estimation and image-based visual servoing when the camera approaches a singularity.Pix2Shape: towards unsupervised learning of 3D scenes from images using a view-based representationhttps://www.zbmath.org/1483.684452022-05-16T20:40:13.078697Z"Rajeswar, Sai"https://www.zbmath.org/authors/?q=ai:rajeswar.sai"Mannan, Fahim"https://www.zbmath.org/authors/?q=ai:mannan.fahim"Golemo, Florian"https://www.zbmath.org/authors/?q=ai:golemo.florian"Parent-Lévesque, Jérôme"https://www.zbmath.org/authors/?q=ai:parent-levesque.jerome"Vazquez, David"https://www.zbmath.org/authors/?q=ai:vazquez.david"Nowrouzezahrai, Derek"https://www.zbmath.org/authors/?q=ai:nowrouzezahrai.derek"Courville, Aaron"https://www.zbmath.org/authors/?q=ai:courville.aaron-cSummary: We infer and generate three-dimensional (3D) scene information from a single input image, without supervision. This problem is under-explored, with most prior work relying on supervision from, e.g., 3D ground-truth, multiple images of a scene, image silhouettes or key-points. We propose \textbf{Pix2Shape}, an approach to solve this problem with four components: (i) an encoder that infers the latent 3D representation from an image, (ii) a decoder that generates an explicit 2.5D surfel-based reconstruction of a scene -- from the latent code -- (iii) a differentiable renderer that synthesizes a 2D image from the surfel representation, and (iv) a critic network trained to discriminate between images generated by the decoder-renderer and those from a training distribution. Pix2Shape can generate complex 3D scenes that scale with the view-dependent on-screen resolution, unlike representations that capture world-space resolution, i.e., voxels or meshes.
We show that Pix2Shape learns a consistent scene representation in its encoded latent space, and that the decoder can then be applied to this latent representation in order to synthesize the scene from a novel viewpoint. We evaluate Pix2Shape with experiments on the ShapeNet dataset as well as on a novel benchmark we developed -- called 3D-IQTT -- to evaluate models based on their ability to enable 3D spatial reasoning. Qualitative and quantitative evaluations demonstrate Pix2Shape's ability to solve scene reconstruction, generation and understanding tasks.Light structure from pin motion: geometric point light source calibrationhttps://www.zbmath.org/1483.684462022-05-16T20:40:13.078697Z"Santo, Hiroaki"https://www.zbmath.org/authors/?q=ai:santo.hiroaki"Waechter, Michael"https://www.zbmath.org/authors/?q=ai:waechter.michael"Lin, Wen-Yan"https://www.zbmath.org/authors/?q=ai:lin.wen-yan"Sugano, Yusuke"https://www.zbmath.org/authors/?q=ai:sugano.yusuke"Matsushita, Yasuyuki"https://www.zbmath.org/authors/?q=ai:matsushita.yasuyukiSummary: We present a method for geometric point light source calibration. Unlike prior works that use Lambertian spheres, mirror spheres, or mirror planes, we use a calibration target consisting of a plane and small shadow casters at unknown positions above the plane. We show that shadow observations from a moving calibration target under a fixed light follow the principles of pinhole camera geometry and epipolar geometry, allowing joint recovery of the light position and 3D shadow caster positions, equivalent to how conventional structure from motion jointly recovers camera parameters and 3D feature positions from observed 2D features. Moreover, we devise a unified light model that works with nearby point lights as well as distant lights in one common framework.
Our evaluation shows that our method yields light estimates that are stable and more accurate than those of existing techniques, while having a much simpler setup and requiring less manual labor.A camera model for line-scan cameras with telecentric lenseshttps://www.zbmath.org/1483.684472022-05-16T20:40:13.078697Z"Steger, Carsten"https://www.zbmath.org/authors/?q=ai:steger.carsten"Ulrich, Markus"https://www.zbmath.org/authors/?q=ai:ulrich.markusSummary: We propose a camera model for line-scan cameras with telecentric lenses. The camera model assumes a linear relative motion with constant velocity between the camera and the object. It allows lens distortions to be modeled, while supporting arbitrary positions of the line sensor with respect to the optical axis. We comprehensively examine the degeneracies of the camera model and propose methods to handle them. Furthermore, we examine the relation of the proposed camera model to affine cameras. In addition, we propose an algorithm to calibrate telecentric line-scan cameras using a planar calibration object. We perform an extensive evaluation of the proposed camera model that establishes the validity and accuracy of the proposed model. We also show that even for lenses with very small lens distortions, the distortions are statistically highly significant. Therefore, they cannot be omitted in real-world applications.Adaptive dimension-discriminative low-rank tensor recovery for computational hyperspectral imaginghttps://www.zbmath.org/1483.684482022-05-16T20:40:13.078697Z"Wang, Lizhi"https://www.zbmath.org/authors/?q=ai:wang.lizhi"Zhang, Shipeng"https://www.zbmath.org/authors/?q=ai:zhang.shipeng"Huang, Hua"https://www.zbmath.org/authors/?q=ai:huang.huaSummary: Exploiting prior information is fundamental for image reconstruction in computational hyperspectral imaging (CHI).
Existing methods usually unfold the 3D signal as a 1D vector and then handle the prior information among different dimensions in an indiscriminate manner, which inevitably ignores the high-dimensional nature of the hyperspectral image (HSI) and thus results in poor reconstruction performance. In this paper, we propose a high-order tensor optimization based reconstruction method to boost the quality of CHI. Specifically, we first propose an adaptive dimension-discriminative low-rank tensor recovery (ADLTR) model to exploit the high-dimensionality prior of HSI faithfully. In the ADLTR model, we utilize 3D tensors as the basic elements to fundamentally preserve the structure information in the spatial and spectral dimensions, introduce a dimension-discriminative low-rankness model to fully characterize the prior in the basic elements, and propose a weight estimation strategy that adaptively exploits the diversity in each dimension. Then, we develop an optimization framework for CHI reconstruction by integrating the structure prior in ADLTR with the system imaging principle, which is finally solved via an alternating minimization scheme. Extensive experiments on both synthetic and real data demonstrate that our method outperforms state-of-the-art methods.Evaluating visual properties via robust HodgeRankhttps://www.zbmath.org/1483.684492022-05-16T20:40:13.078697Z"Xu, Qianqian"https://www.zbmath.org/authors/?q=ai:xu.qianqian"Xiong, Jiechao"https://www.zbmath.org/authors/?q=ai:xiong.jiechao"Cao, Xiaochun"https://www.zbmath.org/authors/?q=ai:cao.xiaochun"Huang, Qingming"https://www.zbmath.org/authors/?q=ai:huang.qingming"Yao, Yuan"https://www.zbmath.org/authors/?q=ai:yao.yuanSummary: Nowadays, how to effectively evaluate visual properties has become a popular topic in fine-grained visual comprehension.
In this paper we study the problem of how to estimate such visual properties from a ranking perspective with the help of the annotators from online crowdsourcing platforms. The main challenges of our task are two-fold. On one hand, the annotations often contain contaminated information, where a small fraction of label flips might ruin the global ranking of the whole dataset. On the other hand, considering the large data capacity, the annotations are often far from being complete. What is worse, there might even exist imbalanced annotations where a small subset of samples are frequently annotated. Facing such challenges, we propose a robust ranking framework based on the principle of Hodge decomposition of imbalanced and incomplete ranking data. According to the HodgeRank theory, we find that the major source of the contamination comes from the cyclic ranking component of the Hodge decomposition. This leads us to an outlier detection formulation as sparse approximations of the cyclic ranking projection. Taking a step further, it facilitates a novel outlier detection model as Huber's LASSO in robust statistics. Moreover, simple yet scalable algorithms are developed based on Linearized Bregman Iteration to achieve an even less biased estimator. Statistical consistency of outlier detection is established in both cases under nearly the same conditions. Our studies are supported by experiments with both simulated examples and real-world data. 
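The HodgeRank principle underlying this framework can be sketched in a few lines: the global ranking is the least-squares (gradient) component of the Hodge decomposition over the comparison graph, and the residual is the cyclic component where, as described above, outliers concentrate. A minimal NumPy illustration on toy data (our own names and numbers; the paper's Huber-LASSO outlier model and Linearized Bregman Iteration are not implemented here):

```python
import numpy as np

def hodge_rank(n, comparisons):
    """
    Least-squares (gradient) component of the Hodge decomposition.

    comparisons: list of (i, j, y) meaning "item i beats item j by margin y".
    Returns mean-centered global scores s minimizing
        sum over (i, j, y) of (s_i - s_j - y)^2,
    plus the per-comparison residual, i.e. the cyclic part of the data
    where inconsistencies and outliers show up.
    """
    D = np.zeros((len(comparisons), n))   # incidence matrix of the comparison graph
    y = np.zeros(len(comparisons))
    for k, (i, j, margin) in enumerate(comparisons):
        D[k, i], D[k, j], y[k] = 1.0, -1.0, margin
    s, *_ = np.linalg.lstsq(D, y, rcond=None)
    s -= s.mean()                         # scores are defined only up to a constant
    return s, y - D @ s

# Consistent data for true scores (0.2, 0, -0.2) plus one flipped comparison.
comps = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0), (2, 0, 2.0)]  # last one is an outlier
scores, residual = hodge_rank(3, comps)
print(scores)    # recovers the ordering 0 > 1 > 2 despite the flip
print(residual)  # the flipped comparison carries the largest residual
```

The global ordering survives the single flipped comparison, while the residual singles it out, which is the intuition behind detecting outliers in the cyclic ranking component.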
The proposed framework provides a promising tool for robust ranking with large-scale crowdsourcing data arising in computer vision.Adaptive channel selection for robust visual object tracking with discriminative correlation filtershttps://www.zbmath.org/1483.684502022-05-16T20:40:13.078697Z"Xu, Tianyang"https://www.zbmath.org/authors/?q=ai:xu.tianyang"Feng, Zhenhua"https://www.zbmath.org/authors/?q=ai:feng.zhenhua"Wu, Xiao-Jun"https://www.zbmath.org/authors/?q=ai:wu.xiaojun.1"Kittler, Josef"https://www.zbmath.org/authors/?q=ai:kittler.josefSummary: Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the accuracy of DCF and provide a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net inducing independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method, as the process of channel selection performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. The experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method.
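For readers unfamiliar with the DCF baseline that this channel-selection method builds on, here is a minimal single-channel correlation filter in the classical MOSSE-style closed form (an illustrative sketch with our own names; the paper's group elastic net and augmented Lagrangian optimisation are not shown):

```python
import numpy as np

def train_dcf(patch, target, lam=1e-2):
    """
    Closed-form single-channel discriminative correlation filter:
        H = conj(F) * G / (conj(F) * F + lam)
    where F, G are the 2D FFTs of the training patch and of the desired
    (Gaussian) response, and lam is a small regulariser.
    """
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target)
    return np.conj(F) * G / (np.conj(F) * F + lam)

def detect(H, patch):
    """Correlation response; its peak locates the target in the new patch."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# Desired response: a Gaussian peaked at the object centre.
h = w = 32
ys, xs = np.mgrid[0:h, 0:w]
target = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * 2.0 ** 2))

rng = np.random.default_rng(0)
patch = rng.standard_normal((h, w))     # stand-in for one feature channel
H = train_dcf(patch, target)
resp = detect(H, patch)
peak = np.unravel_index(resp.argmax(), resp.shape)
print(peak)  # peak at the centre (16, 16) on the training patch
```

Multi-channel DCF trackers learn one such filter per feature channel and sum the responses; the paper's contribution is to select which channels are worth keeping via a group elastic net.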
Superior performance over state-of-the-art trackers is achieved using fewer than 10\% of the deep feature channels.Learning adaptive classifiers synthesis for generalized few-shot learninghttps://www.zbmath.org/1483.684512022-05-16T20:40:13.078697Z"Ye, Han-Jia"https://www.zbmath.org/authors/?q=ai:ye.hanjia"Hu, Hexiang"https://www.zbmath.org/authors/?q=ai:hu.hexiang"Zhan, De-Chuan"https://www.zbmath.org/authors/?q=ai:zhan.dechuanSummary: Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to recognize the populated head visual concepts reliably and meanwhile efficiently learn about emerging new tail categories with a few training instances. Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, by either learning strong classifiers for the head or learning to learn few-shot classifiers for the tail. In this paper, we investigate the problem of \textit{generalized few-shot learning (GFSL)} -- a model during deployment is required to learn about tail categories with few shots and simultaneously classify the head classes. We propose ClAssifier SynThesis LEarning (\textsc{Castle}), a learning framework that learns how to synthesize calibrated few-shot classifiers in addition to the multi-class classifiers of head classes with a shared neural dictionary, shedding light upon \textit{inductive} GFSL. Furthermore, we propose an adaptive version of \textsc{Castle} (\textsc{aCastle}) that adapts the head classifiers conditioned on the incoming tail training examples, yielding a framework that allows effective backward knowledge transfer. As a consequence, \textsc{aCastle} can handle GFSL with classes from heterogeneous domains effectively. \textsc{Castle} and \textsc{aCastle} demonstrate superior performance to existing GFSL algorithms and strong baselines on the \textit{Mini}ImageNet and \textit{Tiered}ImageNet datasets.
More interestingly, they outperform previous state-of-the-art methods when evaluated with standard few-shot learning criteria.OCNet: object context for semantic segmentationhttps://www.zbmath.org/1483.684522022-05-16T20:40:13.078697Z"Yuan, Yuhui"https://www.zbmath.org/authors/?q=ai:yuan.yuhui"Huang, Lang"https://www.zbmath.org/authors/?q=ai:huang.lang"Guo, Jianyuan"https://www.zbmath.org/authors/?q=ai:guo.jianyuan"Zhang, Chao"https://www.zbmath.org/authors/?q=ai:zhang.chao.8|zhang.chao.3|zhang.chao.2|zhang.chao.7|zhang.chao.5|zhang.chao|zhang.chao.6|zhang.chao.1"Chen, Xilin"https://www.zbmath.org/authors/?q=ai:chen.xilin"Wang, Jingdong"https://www.zbmath.org/authors/?q=ai:wang.jingdongSummary: In this paper, we address the semantic segmentation task with a new context aggregation scheme named \textit{object context}, which focuses on enhancing the role of object information. Motivated by the fact that the category of each pixel is inherited from the object it belongs to, we define the object context for each pixel as the set of pixels that belong to the same category as the given pixel in the image. We use a binary relation matrix to represent the relationship between all pixels, where the value one indicates that the two selected pixels belong to the same category and zero otherwise. We propose to use a dense relation matrix as a surrogate for the binary relation matrix. The dense relation matrix is capable of emphasizing the contribution of object information, as the relation scores tend to be larger on object pixels than on other pixels. Considering that dense relation matrix estimation requires quadratic computation overhead and memory consumption w.r.t. the input size, we propose an efficient interlaced sparse self-attention scheme to model the dense relations between any two of all pixels via the combination of two sparse relation matrices.
To capture richer context information, we further combine our interlaced sparse self-attention scheme with conventional multi-scale context schemes, including pyramid pooling (Zhao et al. 2017) and atrous spatial pyramid pooling (Chen et al. 2018). We empirically show the advantages of our approach, with competitive performance on five challenging benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff.Unsupervised assignment flow: label learning on feature manifolds by spatially regularized geometric assignmenthttps://www.zbmath.org/1483.684532022-05-16T20:40:13.078697Z"Zern, Artjom"https://www.zbmath.org/authors/?q=ai:zern.artjom"Zisler, Matthias"https://www.zbmath.org/authors/?q=ai:zisler.matthias"Petra, Stefania"https://www.zbmath.org/authors/?q=ai:petra.stefania"Schnörr, Christoph"https://www.zbmath.org/authors/?q=ai:schnorr.christophSummary: This paper introduces the \textit{unsupervised assignment flow} that couples the assignment flow for supervised image labeling [\textit{F. Åström} et al., J. Math. Imaging Vis. 58, No. 2, 211--238 (2017; Zbl 1460.62101)] with Riemannian gradient flows for label evolution on feature manifolds. The latter component of the approach encompasses extensions of state-of-the-art clustering approaches to manifold-valued data. Coupling label evolution with the spatially regularized assignment flow induces a sparsifying effect that makes it possible to learn compact label dictionaries in an unsupervised manner. Our approach alleviates the requirement for supervised labeling to have proper labels at hand, because an initial set of labels can evolve and adapt to better values while being assigned to given data. The separation between feature and assignment manifolds enables flexible application, which is demonstrated for three scenarios with manifold-valued features.
Experiments demonstrate a beneficial effect in both directions: adaptivity of labels improves image labeling, and steering label evolution by spatially regularized assignments leads to proper labels, because the assignment flow for supervised labeling is used exactly, without any approximation, for label learning.Beyond covariance: SICE and kernel based visual feature representationhttps://www.zbmath.org/1483.684542022-05-16T20:40:13.078697Z"Zhang, Jianjia"https://www.zbmath.org/authors/?q=ai:zhang.jianjia"Wang, Lei"https://www.zbmath.org/authors/?q=ai:wang.lei.6|wang.lei.8|wang.lei.7|wang.lei.16|wang.lei|wang.lei.15|wang.lei.11|wang.lei.19|wang.lei.9|wang.lei.5|wang.lei.18|wang.lei.4|wang.lei.17|wang.lei.14"Zhou, Luping"https://www.zbmath.org/authors/?q=ai:zhou.luping"Li, Wanqing"https://www.zbmath.org/authors/?q=ai:li.wanqingSummary: The past several years have witnessed increasing research interest in covariance-based feature representation. Originally proposed as a region descriptor, it has now been used as a general representation in various recognition tasks, demonstrating promising performance. However, the covariance matrix has some inherent shortcomings, such as singularity in the case of small samples, limited capability in modeling complicated feature relationships, and a single, fixed form of representation. To achieve better recognition performance, this paper argues that more capable and flexible symmetric positive definite (SPD)-matrix-based representations shall be explored, and this is attempted in this work by exploiting prior knowledge of data and nonlinear representation. Specifically, to better deal with the issues of a small number of feature vectors and high feature dimensionality, we propose to exploit the structure sparsity of visual features and exemplify the sparse inverse covariance estimate as a new feature representation.
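The small-sample singularity that motivates moving beyond the covariance descriptor is easy to demonstrate numerically. The sketch below uses a simple shrinkage-regularised precision matrix as a stand-in for SICE (true SICE solves an \(\ell_1\)-penalised graphical-lasso problem, which additionally yields a sparse inverse; all names and the value of `lam` are illustrative):

```python
import numpy as np

def covariance_descriptor(X):
    """d x d covariance over n feature vectors (rows of X).
    Its rank is at most n-1, hence it is singular whenever n <= d."""
    return np.cov(X, rowvar=False, bias=True)

def regularized_precision(X, lam=0.1):
    """Shrinkage stand-in for SICE: invert C + lam*I instead of the
    singular C. (True SICE uses an l1-penalised graphical-lasso solver;
    lam = 0.1 is an arbitrary illustrative choice.)"""
    C = covariance_descriptor(X)
    return np.linalg.inv(C + lam * np.eye(C.shape[0]))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 20))          # n=5 samples, d=20 dimensions: n < d
C = covariance_descriptor(X)
P = regularized_precision(X)
print(np.linalg.matrix_rank(C))           # 4: the covariance is singular
print(np.all(np.linalg.eigvalsh(P) > 0))  # the regularised precision is SPD
```

With 5 samples in 20 dimensions the raw covariance has rank 4 and cannot be inverted, while any regularised inverse remains a valid SPD representation.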
Furthermore, to effectively model complicated feature relationships, we propose to directly compute a kernel matrix over the feature dimensions, leading to a robust, flexible and open framework of SPD-matrix-based representation. Through theoretical analysis and experimental study, the two proposed representations clearly demonstrate their advantages over their covariance counterpart in skeletal human action recognition, image set classification and object classification tasks.3D scene reconstruction with an un-calibrated light field camerahttps://www.zbmath.org/1483.684552022-05-16T20:40:13.078697Z"Zhang, Qi"https://www.zbmath.org/authors/?q=ai:zhang.qi.1|zhang.qi.2|zhang.qi-shuhuason|zhang.qi.4|zhang.qi"Li, Hongdong"https://www.zbmath.org/authors/?q=ai:li.hongdong"Wang, Xue"https://www.zbmath.org/authors/?q=ai:wang.xue"Wang, Qing"https://www.zbmath.org/authors/?q=ai:wang.qing|wang.qing.1|wang.qing.3|wang.qing.2|wang.qing.4Summary: This paper is concerned with the problem of multi-view 3D reconstruction with an un-calibrated micro-lens array based light field camera. To acquire a 3D Euclidean reconstruction, existing approaches commonly apply calibration with a checkerboard and motion estimation from static scenes in two separate steps. Self-calibration is the process of simultaneously estimating intrinsic and extrinsic parameters directly from un-calibrated light fields, without the help of a checkerboard. While the self-calibration technique for conventional (pinhole) cameras is well understood, how to extend it to light field cameras remains a challenging task. This is primarily due to the ultra-small baseline of the light field camera. We propose an effective self-calibration method for a light field camera that achieves automatic metric reconstruction without a laborious pre-calibration process.
In contrast to conventional self-calibration, we show how such a self-calibration method can be made numerically stable by exploiting the regularity and measurement redundancies unique to the light field camera. The proposed method is built upon the derivation of a novel ray-space homography constraint (RSHC) using Plücker parameterization, as well as a ray-space infinity homography (RSIH). We also propose a new concept of ``rays of the absolute conic (RAC)'', defined as a special quadric in 5D projective space \(\mathbb{P}^5\). A set of new equations is established and solved for self-calibration and 3D metric reconstruction, specifically designed for a light field camera. We validate the efficacy of the proposed method on both synthetic and real light fields, obtaining superior results in both accuracy and robustness.Proof assistants for natural language semanticshttps://www.zbmath.org/1483.684562022-05-16T20:40:13.078697Z"Chatzikyriakidis, Stergios"https://www.zbmath.org/authors/?q=ai:chatzikyriakidis.stergios"Luo, Zhaohui"https://www.zbmath.org/authors/?q=ai:luo.zhaohuiSummary: In this paper we discuss the use of interactive theorem provers (also called proof assistants) in the study of natural language semantics. It is shown that these provide useful platforms for NL semantics and reasoning on the one hand, and allow experiments to be performed on various frameworks and new theories on the other. In particular, we show how to use Coq, a prominent type theory based proof assistant, to encode type theoretical semantics of various NL phenomena. In this respect, we can encode the NL semantics based on type theory for quantifiers, adjectives, common nouns, and tense, among others, and it is shown that Coq is a powerful engine for checking the formal validity of these accounts, as well as a powerful reasoner about the implemented semantics.
We further show some toy semantic grammars for formal semantic systems, such as the Montagovian Generative Lexicon, Type Theory with Records and neo-Davidsonian semantics. It is also explained that experiments on new theories can be performed, testing their validity and usefulness. Our aim is to show the importance of proof assistants as useful tools in natural language reasoning and verification, and to argue for their wider application in the field.
For the entire collection see [Zbl 1351.68009].Compositional event semantics in pregroup grammarshttps://www.zbmath.org/1483.684572022-05-16T20:40:13.078697Z"Gaudreault, Gabriel"https://www.zbmath.org/authors/?q=ai:gaudreault.gabrielSummary: A derivational approach to event semantics using pregroup grammars as the syntactic framework is defined. This system relies on three crucial components: the explicit introduction of event variables, which are linked to the basic types of a lexical item's grammatical type; the unification of event variables following a concatenation of two expressions and the associated type contraction; and the correspondence between pregroup orderings and the change of the available event variables associated with a lexical item, which the meaning predicates take scope over.
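The type contraction at the heart of this derivational system can be illustrated with a small greedy reduction (a standard textbook sketch of pregroup contraction, not the paper's event-semantics machinery; the two-word lexicon below is a stock example):

```python
def reduces_to_s(types):
    """
    Greedy stack-based pregroup reduction. A simple type is a pair
    (base, z): z = 0 for the type itself, -1 for the left adjoint a^l,
    +1 for the right adjoint a^r (iterated adjoints use larger |z|).
    The contraction rule cancels adjacent (a, z)(a, z+1).
    """
    stack = []
    for t in types:
        if stack and stack[-1][0] == t[0] and stack[-1][1] + 1 == t[1]:
            stack.pop()               # contraction: (a, z)(a, z+1) -> 1
        else:
            stack.append(t)
    return stack == [("s", 0)]        # grammatical iff everything reduces to s

# "John likes Mary": n  (n^r s n^l)  n
n, s = "n", "s"
john, mary = [(n, 0)], [(n, 0)]
likes = [(n, 1), (s, 0), (n, -1)]     # transitive verb type n^r s n^l
print(reduces_to_s(john + likes + mary))   # True
print(reduces_to_s(likes + john + mary))   # False: wrong word order
```

In the paper's setting, each such contraction additionally triggers a unification of the event variables attached to the cancelled basic types.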
For the entire collection see [Zbl 1351.68009].A compositional distributional inclusion hypothesishttps://www.zbmath.org/1483.684582022-05-16T20:40:13.078697Z"Kartsaklis, Dimitri"https://www.zbmath.org/authors/?q=ai:kartsaklis.dimitri"Sadrzadeh, Mehrnoosh"https://www.zbmath.org/authors/?q=ai:sadrzadeh.mehrnooshSummary: The distributional inclusion hypothesis provides a pragmatic way of evaluating entailment between word vectors as represented in a distributional model of meaning. In this paper, we extend this hypothesis to the realm of compositional distributional semantics, where meanings of phrases and sentences are computed by composing their word vectors. We present a theoretical analysis for how feature inclusion is interpreted under each composition operator, and propose a measure for evaluating entailment at the phrase/sentence level. We perform experiments on four entailment datasets, showing that intersective composition in conjunction with our proposed measure achieves the highest performance.
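As word-level background to the phrase/sentence-level measure proposed in this paper, a basic feature-inclusion score can be computed directly from count vectors (Weeds precision, a standard baseline; the toy vectors are illustrative and the paper's compositional measure is not reproduced):

```python
import numpy as np

def inclusion(u, v):
    """
    Feature-inclusion score in the spirit of the distributional inclusion
    hypothesis: the proportion of u's feature mass that also occurs in v
    (Weeds precision). A value near 1 suggests u's contexts are included
    in v's, i.e. evidence that u entails (is a hyponym of) v.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.sum(u[(u > 0) & (v > 0)]) / np.sum(u)

# Toy count vectors over 5 context features (illustrative numbers only).
dog    = np.array([4.0, 2.0, 0.0, 1.0, 0.0])
animal = np.array([3.0, 5.0, 2.0, 4.0, 1.0])
print(inclusion(dog, animal))   # 1.0: every "dog" feature occurs with "animal"
print(inclusion(animal, dog))   # 0.8: the reverse direction is weaker
```

The asymmetry of the score is what makes it usable as an entailment signal rather than a mere similarity.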
For the entire collection see [Zbl 1351.68009].XMG 2: describing description languageshttps://www.zbmath.org/1483.684592022-05-16T20:40:13.078697Z"Petitjean, Simon"https://www.zbmath.org/authors/?q=ai:petitjean.simon"Duchier, Denys"https://www.zbmath.org/authors/?q=ai:duchier.denys"Parmentier, Yannick"https://www.zbmath.org/authors/?q=ai:parmentier.yannickSummary: This paper introduces XMG 2, a modular and extensible tool for various linguistic description tasks. Based on the notion of meta-compilation (that is, compilation of compilers), XMG 2 reuses the main concepts underlying XMG, namely logic programming and constraint satisfaction, to generate on-demand XMG-like compilers by assembling elementary units called bricks. This brick-based definition of compilers permits users to design description languages in a highly flexible way. In particular, it makes it possible to support several levels of linguistic description (e.g. syntax, morphology) within a single description language. XMG 2 aims to offer users the means to easily define description languages that fit the linguistic intuition as closely as possible.
For the entire collection see [Zbl 1351.68009].A note about a new geometric solution of the problem of the synchronous tracking of the indices of numerous sensors of different typeshttps://www.zbmath.org/1483.684602022-05-16T20:40:13.078697Z"Akhundov, A. A."https://www.zbmath.org/authors/?q=ai:akhundov.a-a(no abstract)Geometric systems of unbiased representativeshttps://www.zbmath.org/1483.684612022-05-16T20:40:13.078697Z"Banik, Aritra"https://www.zbmath.org/authors/?q=ai:banik.aritra"Bhattacharya, Bhaswar B."https://www.zbmath.org/authors/?q=ai:bhattacharya.bhaswar-b"Bhore, Sujoy"https://www.zbmath.org/authors/?q=ai:bhore.sujoy-kumar"Martínez-Sandoval, Leonardo"https://www.zbmath.org/authors/?q=ai:martinez-sandoval.leonardoSummary: Let \(P\) be a finite point set in \(\mathbb{R}^d\), \(B\) be a bicoloring of \(P\) and \(\mathcal{O}\) be a family of geometric objects (e.g., intervals, boxes, balls). An object from \(\mathcal{O}\) is called \textit{balanced with respect to \(B\)} if it contains the same number of points from each color of \(B\). For a collection \(\mathcal{B}\) of bicolorings of \(P\), a \textit{geometric system of unbiased representatives (G-SUR)} is a subset \(\mathcal{O}'\subseteq\mathcal{O}\) such that for any bicoloring \(B\) of \(\mathcal{B}\) there is an object in \(\mathcal{O}'\) that is balanced with respect to \(B\).
We pose and study problems on finding G-SURs. We obtain general bounds on the size of G-SURs consisting of intervals, size-restricted intervals, axis-parallel boxes and Euclidean balls. We show that the G-SUR problem is \textsf{NP}-hard even in the simple case of points on a line and interval ranges. Furthermore, we study a related problem on determining the size of the largest and smallest \textit{balanced intervals} for points on the real line with a random distribution and coloring.
Our results are a natural extension to a geometric context of the work initiated by \textit{N. Balachandran} et al. [Discrete Math. 341, No. 6, 1732--1739 (2018; Zbl 1384.05118)] on arbitrary systems of unbiased representatives.Polynomial data structure lower bounds in the group modelhttps://www.zbmath.org/1483.684622022-05-16T20:40:13.078697Z"Golovnev, Alexander"https://www.zbmath.org/authors/?q=ai:golovnev.alexander"Posobin, Gleb"https://www.zbmath.org/authors/?q=ai:posobin.gleb"Regev, Oded"https://www.zbmath.org/authors/?q=ai:regev.oded"Weinstein, Omri"https://www.zbmath.org/authors/?q=ai:weinstein.omriMinimum point-overlap labellinghttps://www.zbmath.org/1483.684632022-05-16T20:40:13.078697Z"Higashikawa, Yuya"https://www.zbmath.org/authors/?q=ai:higashikawa.yuya"Imai, Keiko"https://www.zbmath.org/authors/?q=ai:imai.keiko"Shiraga, Takeharu"https://www.zbmath.org/authors/?q=ai:shiraga.takeharu"Sukegawa, Noriyoshi"https://www.zbmath.org/authors/?q=ai:sukegawa.noriyoshi"Yokosuka, Yusuke"https://www.zbmath.org/authors/?q=ai:yokosuka.yusukeSummary: In an application of map labelling to air-traffic control, labels should be placed with as few overlaps as possible since labels include important information about airplanes. Motivated by this application,
\textit{M. de Berg} and \textit{D. H. P. Gerrits} [Comput. Geom. 45, No. 4, 153--168 (2012; Zbl 1242.65042)]
proposed the problem of maximizing the number of free labels (i.e., labels not intersecting any other label) and developed approximation algorithms for their problem under various label-placement models. In this paper, we propose an alternative problem of minimizing the degree of overlap at a point. Specifically, the objective of this problem is to minimize the maximum of \(\lambda(p)\) over \(p\in\mathbb{R}^2\), where \(\lambda(p)\) is defined as the sum of the weights of the labels that overlap a point \(p\). We develop a 4-approximation algorithm by LP-rounding under the 4-position model. We also investigate the case when labels are rectangles with bounded height/length ratios.A filtering heuristic for the computation of minimum-volume enclosing ellipsoidshttps://www.zbmath.org/1483.684642022-05-16T20:40:13.078697Z"Källberg, Linus"https://www.zbmath.org/authors/?q=ai:kallberg.linus"Larsson, Thomas"https://www.zbmath.org/authors/?q=ai:larsson.thomasSummary: We study heuristics to accelerate existing state-of-the-art algorithms for the minimum-volume enclosing ellipsoid problem. We propose a new filtering heuristic that can significantly reduce the number of distance computations performed in algorithms derived from Khachiyan's first-order algorithm. Our experiments indicate that in high dimensions, the filtering heuristic is more effective than the elimination heuristic proposed by Harman and Pronzato. In lower dimensions, the elimination heuristic is superior.
For the entire collection see [Zbl 1377.68004].Minimum-link shortest paths for polygons amidst rectilinear obstacleshttps://www.zbmath.org/1483.684652022-05-16T20:40:13.078697Z"Kim, Mincheol"https://www.zbmath.org/authors/?q=ai:kim.mincheol"Ahn, Hee-Kap"https://www.zbmath.org/authors/?q=ai:ahn.hee-kapSummary: We compute shortest paths connecting two axis-aligned rectilinear simple polygons in the domain consisting of axis-aligned rectilinear obstacles in the plane. The bounding boxes, one defined for each polygon and one defined for each obstacle, are disjoint. We present an algorithm that computes a minimum-link rectilinear shortest path connecting the two polygons and avoiding the obstacles in \(O(n\log n)\) time using \(O(n)\) space, where \(n\) is the total number of vertices of two polygons and obstacles in the domain.The maximum exposure problemhttps://www.zbmath.org/1483.684662022-05-16T20:40:13.078697Z"Kumar, Neeraj"https://www.zbmath.org/authors/?q=ai:kumar.neeraj.1"Sintos, Stavros"https://www.zbmath.org/authors/?q=ai:sintos.stavros"Suri, Subhash"https://www.zbmath.org/authors/?q=ai:suri.subhashSummary: Given a set of points \(P\) and axis-aligned rectangles \(\mathcal{R}\) in the plane, a point \(p\in P\) is called \textit{exposed} if it lies outside all rectangles in \(\mathcal{R}\). In the \textit{max-exposure problem}, given an integer parameter \(k\), we want to delete \(k\) rectangles from \(\mathcal{R}\) so as to maximize the number of exposed points. We show that the problem is \textsf{NP}-hard and assuming plausible complexity conjectures is also hard to approximate even when rectangles in \(\mathcal{R}\) are translates of two fixed rectangles. However, if \(\mathcal{R}\) only consists of translates of a single rectangle, we present a polynomial-time approximation scheme. 
For range spaces defined by general rectangles, we present a simple \(O(k)\) bicriteria approximation algorithm; that is, by deleting \(O(k^2)\) rectangles, we can expose at least an \(\Omega(1/k)\) fraction of the optimal number of points.An algorithm for deciding whether a point set is inside a polygon based on binary searchhttps://www.zbmath.org/1483.684672022-05-16T20:40:13.078697Z"Pan, Ri Hong"https://www.zbmath.org/authors/?q=ai:pan.rihong(no abstract)The prefix Fréchet similarityhttps://www.zbmath.org/1483.684682022-05-16T20:40:13.078697Z"Scheffer, Christian"https://www.zbmath.org/authors/?q=ai:scheffer.christianSummary: We present the \textit{prefix Fréchet similarity} as a new measure for similarity of curves which is e.g. motivated by evacuation analysis and defined as follows. Given two (polygonal) curves \(T\) and \(T'\), we ask for two \textit{prefix curves} of \(T\) and \(T'\) which have a Fréchet distance no larger than a given distance threshold \(\delta \ge 0\) w.r.t. the \(L_1\) metric such that the sum of the lengths of the prefix curves is maximal. As parameterized Fréchet measures such as the prefix Fréchet similarity are highly unstable w.r.t. the value of the distance threshold \(\delta\), we give an algorithm that computes exactly the \textit{profile} of the prefix Fréchet similarity, i.e., the complete functional relation between \(\delta\) and the prefix Fréchet similarity of \(T\) and \(T'\). This is the first efficient algorithm for computing exactly the whole profile of a parametrized Fréchet distance.
While the running time of our algorithm for computing the profile of the prefix Fréchet similarity is \(\mathcal{O}\left( n^3 \log n\right)\), we provide a lower bound of \(\varOmega(n^2)\) for the running time of any algorithm computing the profile of the prefix Fréchet similarity, where \(n\) denotes the number of segments on \(T\) and \(T'\). This implies that our running time is at most a near linear factor away from being optimal.
For the entire collection see [Zbl 1408.68014].The prefix Fréchet similarityhttps://www.zbmath.org/1483.684692022-05-16T20:40:13.078697Z"Scheffer, Christian"https://www.zbmath.org/authors/?q=ai:scheffer.christianSummary: We present the \textit{prefix Fréchet similarity} as a new measure for similarity of curves which is for instance motivated by evacuation analysis and defined as follows. Given two (polygonal) curves \(T\) and \(T^\prime\), we ask for two \textit{prefix curves} of \(T\) and \(T^\prime\) which have a Fréchet distance no larger than a given distance threshold \(\delta\geq 0\) with respect to the \(L_1\) metric such that the sum of the lengths of the prefix curves is maximal. As parameterized Fréchet measures, such as the prefix Fréchet similarity, are highly unstable regarding the value of the distance threshold \(\delta\), we give an algorithm that computes exactly the \textit{profile} of the prefix Fréchet similarity, that is, the complete functional relation between \(\delta\) and the prefix Fréchet similarity of \(T\) and \(T^\prime\). This is the first efficient algorithm for computing exactly the whole profile of a parametrized Fréchet distance.
While the running time of our algorithm for computing the profile of the prefix Fréchet similarity is \(\mathcal{O}(n^3\log n)\), we provide a lower bound of \(\Omega(n^2)\) for the running time of any algorithm computing the profile of the prefix Fréchet similarity, where \(n\) denotes the number of segments on \(T\) and \(T^\prime\). This implies that our running time is at most a near linear factor away from being optimal.Realizability of graphs as triangle cover contact graphshttps://www.zbmath.org/1483.684702022-05-16T20:40:13.078697Z"Sultana, Shaheena"https://www.zbmath.org/authors/?q=ai:sultana.shaheena"Rahman, Md. Saidur"https://www.zbmath.org/authors/?q=ai:rahman.md-saidurSummary: Let \(S=\{p_1,p_2,\ldots ,p_n\}\) be a set of pairwise disjoint geometric objects of some type and let \(C=\{c_1,c_2,\ldots ,c_n\}\) be a set of closed objects of some type with the property that each element in \(C\) covers exactly one element in \(S\) and any two elements in \(C\) can intersect only on their boundaries. We call an element in \(S\) a seed and an element in \(C\) a cover. A cover contact graph (CCG) consists of a set of vertices and a set of edges where each of the vertex corresponds to each of the covers and each edge corresponds to a connection between two covers if and only if they touch at their boundaries. A triangle cover contact graph (TCCG) is a cover contact graph whose cover elements are triangles. In this paper, we show that every Halin graph has a realization as a TCCG on a given set of collinear seeds. We introduce a new class of graphs which we call super-Halin graphs. We also show that the classes super-Halin graphs, cubic planar Hamiltonian graphs and \(a\times b\) grid graphs have realizations as TCCGs on collinear seeds. We also show that every complete graph has a realization as a TCCG on any given set of seeds. Note that only trees and cycles are known to be realizable as CCGs and outerplanar graphs are known to be realizable as TCCGs.
For the entire collection see [Zbl 1377.68004].Constrained light deployment for reducing energy consumption in buildingshttps://www.zbmath.org/1483.684712022-05-16T20:40:13.078697Z"Tian, Huamei"https://www.zbmath.org/authors/?q=ai:tian.huamei"Wu, Kui"https://www.zbmath.org/authors/?q=ai:wu.kui"Whitesides, Sue"https://www.zbmath.org/authors/?q=ai:whitesides.sue-h"Feng, Cuiying"https://www.zbmath.org/authors/?q=ai:feng.cuiyingSummary: Lighting systems account for a major part of the energy consumed by large commercial buildings. This paper aims at reducing this energy consumption by defining the Constrained Light Deployment Problem (CLDP). This new problem is related to the classical Art Gallery problem (AGP) in computational geometry. In contrast to AGP, which asks for the minimum number of guards to monitor a polygonal area, our problem, CLDP, poses a new challenging requirement: not only must each point \(p\) have an unobstructed line-of-sight to a light source, but also, the combined illuminance at \(p\) from all light sources must exceed some given threshold value. We provide evidence that our new problem is NP-hard, based on known results for AGP. Then we propose fast heuristics for floor plans shaped like orthogonal polygons, with and without holes. Our problem formulation allows lights to be placed internally, not only at vertices. Our algorithm, which combines ideas from computational geometry, clustering and binary search, computes a set of light placements that satisfies the illumination requirement. The algorithm seeks a light set of minimum size by an iterative binary search procedure that progressively tightens upper and lower bounds.
For the entire collection see [Zbl 1377.68004].Algorithms for colourful simplicial depth and medians in the planehttps://www.zbmath.org/1483.684722022-05-16T20:40:13.078697Z"Zasenko, Olga"https://www.zbmath.org/authors/?q=ai:zasenko.olga"Stephen, Tamon"https://www.zbmath.org/authors/?q=ai:stephen.tamonSummary: The colourful simplicial depth (CSD) of a point \(x \in \mathbb {R}^2\) relative to a configuration \(P=(P^1, P^2, \ldots , P^k)\) of \(n\) points in \(k\) colour classes is exactly the number of closed simplices (triangles) with vertices from 3 different colour classes that contain \(x\) in their convex hull. We consider the problems of efficiently computing the colourful simplicial depth of a point \(x\), and of finding a point \(x \in \mathbb {R}^2\), called a median, that maximizes colourful simplicial depth.
For computing the colourful simplicial depth of \(x\), our algorithm runs in time \(O\left( n \log {n} + k n \right) \) in general, and \(O(kn)\) if the points are sorted around \(x\). For finding the colourful median, we get a time of \(O(n^4)\). For comparison, the running times of the best known algorithms for the monochrome versions of these problems are \(O\left( n \log {n} \right) \) in general, improving to \(O(n)\) if the points are sorted around \(x\) for monochrome depth, and \(O(n^4)\) for finding a monochrome median.
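The CSD definition above is concrete enough to check directly. The following is a minimal brute-force sketch for testing faster implementations (cubic time; it is not the authors' \(O(n \log n + kn)\) algorithm, and the function name and point representation are our own):

```python
from itertools import combinations

def csd_brute_force(points, colours, x):
    """Colourful simplicial depth of x: the number of closed triangles
    whose three vertices come from three distinct colour classes and
    contain x. Brute force over all triples; O(n^3), for testing only."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def contains(a, b, c, p):
        # p lies in the closed triangle abc iff the three signed areas
        # do not take both strict signs
        d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)

    depth = 0
    for i, j, k in combinations(range(len(points)), 3):
        if len({colours[i], colours[j], colours[k]}) == 3:
            if contains(points[i], points[j], points[k], x):
                depth += 1
    return depth
```

For example, three points in three distinct colour classes surrounding \(x=(0,0)\) give depth 1, while the same triangle with only two colour classes contributes nothing.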
For the entire collection see [Zbl 1377.68004].Binary handwriting image enhancement by directional field-guided morphologyhttps://www.zbmath.org/1483.684732022-05-16T20:40:13.078697Z"Adamski, Marcin"https://www.zbmath.org/authors/?q=ai:adamski.marcin"Sarnacki, Kacper"https://www.zbmath.org/authors/?q=ai:sarnacki.kacper"Saeed, Khalid"https://www.zbmath.org/authors/?q=ai:saeed.khalidSummary: This paper proposes a technique for processing handwriting images. The algorithm used in this study is an improvement to the binarisation process. The enhancement focuses on correcting damaged lines that usually arise during the binarisation process, particularly, spurious holes, discontinuities, and eroded boundaries. The presented method uses a morphological dilation operation in which a structural element is locally adapted using the information from a directional field. The adaptation process involves a new criterion for selecting orientation and shape of a structural element that combines directional field, a coherence measure, and a circular histogram. The field was computed using a gradient-based approach, and a method based on a Hessian matrix. During experiments, our method was applied to the output of selected binarisation algorithms. The experiments were conducted on grayscale signature images (from the CEDAR database) and handwriting images (from the DIBCO database). The results of the algorithm were compared to the results of standard morphological operations (dilation, erosion, opening, and closing) and median filtering. 
The experiments show that the proposed method achieves significant accuracy improvement (8\%--12\% for Acc, 15\%--32\% for Acc2 measures), reduces the number of unwanted artefacts, and produces images with less distortion compared to those from standard approaches.Evaluation metrics for conditional image generationhttps://www.zbmath.org/1483.684742022-05-16T20:40:13.078697Z"Benny, Yaniv"https://www.zbmath.org/authors/?q=ai:benny.yaniv"Galanti, Tomer"https://www.zbmath.org/authors/?q=ai:galanti.tomer"Benaim, Sagie"https://www.zbmath.org/authors/?q=ai:benaim.sagie"Wolf, Lior"https://www.zbmath.org/authors/?q=ai:wolf.liorSummary: We present two new metrics for evaluating generative models in the class-conditional image generation setting. These metrics are obtained by generalizing the two most popular unconditional metrics: the Inception Score (IS) and the Fréchet Inception Distance (FID). A theoretical analysis shows the motivation behind each proposed metric and links the novel metrics to their unconditional counterparts. The link takes the form of a product in the case of IS or an upper bound in the FID case. We provide an extensive empirical evaluation, comparing the metrics to their unconditional variants and to other metrics, and utilize them to analyze existing generative models, thus providing additional insights about their performance, from unlearned classes to mode collapse.Rooted spanning superpixelshttps://www.zbmath.org/1483.684752022-05-16T20:40:13.078697Z"Chai, Dengfeng"https://www.zbmath.org/authors/?q=ai:chai.dengfengSummary: This paper proposes a new approach for superpixel segmentation. It is formulated as finding a rooted spanning forest of a graph with respect to some roots and a path-cost function. 
The underlying graph represents an image, the roots serve as seeds for segmentation, each pixel is connected to one seed via a path, the path-cost function measures both the color similarity and spatial closeness between two pixels via a path, and each tree in the spanning forest represents one superpixel. Originating from the evenly distributed seeds, the superpixels are guided by a path-cost function to grow uniformly and adaptively; the pixel-by-pixel growing continues until they cover the whole image. The number of superpixels is controlled by the number of seeds. The connectivity is maintained by region growing. Good performance is ensured by connecting each pixel to the most similar seed, as determined by the path-cost function. It is evaluated on both the superpixel benchmark and the supervoxel benchmark. Its performance is ranked second among top performing state-of-the-art methods. Moreover, it is much faster than the other superpixel and supervoxel methods.Correction to: ``Rooted spanning superpixels''https://www.zbmath.org/1483.684762022-05-16T20:40:13.078697Z"Chai, Dengfeng"https://www.zbmath.org/authors/?q=ai:chai.dengfengThe author regrets the omission of the reference [\textit{J. E. Vargas-Muñoz} et al., IEEE Trans. Image Process. 28, No. 7, 3477--3489 (2019; Zbl 1483.68477)] in his paper [Int. J. Comput. Vis. 128, No.
12, 2962--2978 (2020; Zbl 1483.68475)].An iterative spanning forest framework for superpixel segmentationhttps://www.zbmath.org/1483.684772022-05-16T20:40:13.078697Z"Vargas-Muñoz, John E."https://www.zbmath.org/authors/?q=ai:vargas-munoz.john-e"Chowdhury, Ananda S."https://www.zbmath.org/authors/?q=ai:chowdhury.ananda-s"Alexandre, Eduardo B."https://www.zbmath.org/authors/?q=ai:alexandre.eduardo-b"Galvão, Felipe L."https://www.zbmath.org/authors/?q=ai:galvao.felipe-l"Vechiatto Miranda, Paulo A."https://www.zbmath.org/authors/?q=ai:vechiatto-miranda.paulo-a"Falcão, Alexandre X."https://www.zbmath.org/authors/?q=ai:falcao.alexandre-xavierEditorial remark: No review copy delivered.Improved clustering algorithms for image segmentation based on non-local information and back projectionhttps://www.zbmath.org/1483.684782022-05-16T20:40:13.078697Z"Zhang, Xiaofeng"https://www.zbmath.org/authors/?q=ai:zhang.xiaofeng"Sun, Yujuan"https://www.zbmath.org/authors/?q=ai:sun.yujuan"Liu, Hui"https://www.zbmath.org/authors/?q=ai:liu.hui.1|liu.hui.2|liu.hui.3|liu.hui.4"Hou, Zhongjun"https://www.zbmath.org/authors/?q=ai:hou.zhongjun"Zhao, Feng"https://www.zbmath.org/authors/?q=ai:zhao.feng"Zhang, Caiming"https://www.zbmath.org/authors/?q=ai:zhang.caimingSummary: Accurate image segmentation is a prerequisite to conducting an image analysis task, and the complexity stemming from the semantic diversity plays a pivotal role in image segmentation. Existing algorithms employed different types of information in the process of segmentation to improve the robustness. However, these algorithms were characterized by a tradeoff between noise removal and detail retention; this is because it is difficult to distinguish image artifacts from details. This paper proposes an improved image segmentation schema and presents two improved clustering algorithms, in which self-similarity and back projection are considered simultaneously to enhance the robustness. 
With the aid of self-similarity, non-local information is fully exploited, while the original information can be retained by back projection. Extensive experiments on various types of images demonstrate that our algorithms can balance noise suppression and detail retention, improving the adaptability of segmentation to complex images.Formal verification of a floating-point expansion renormalization algorithmhttps://www.zbmath.org/1483.684792022-05-16T20:40:13.078697Z"Boldo, Sylvie"https://www.zbmath.org/authors/?q=ai:boldo.sylvie"Joldes, Mioara"https://www.zbmath.org/authors/?q=ai:joldes.mioara"Muller, Jean-Michel"https://www.zbmath.org/authors/?q=ai:muller.jean-michel"Popescu, Valentina"https://www.zbmath.org/authors/?q=ai:popescu.valentinaSummary: Many numerical problems require a higher computing precision than the one offered by standard floating-point formats. A common way of extending the precision is to use floating-point expansions. As the problems may be critical and as the algorithms used have very complex proofs (many sub-cases), a formal guarantee of correctness is a wish that can now be fulfilled, using interactive theorem proving. In this article we give a formal proof in Coq for one of the algorithms used as a basic brick when computing with floating-point expansions, the renormalization, which is usually applied after each operation. It is a critical step needed to ensure that the resulting expansion has the same property as the input one, and is more ``compressed''. The formal proof uncovered several gaps in the pen-and-paper proof and gives the algorithm a very high level of guarantee.
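As standard background to the renormalization entry above (this is not the paper's verified Coq algorithm, and the function names are our own): floating-point expansions are built from error-free transformations, most notably Knuth's TwoSum, and a renormalization procedure chains such passes over the expansion's components. A minimal sketch:

```python
def two_sum(a: float, b: float):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly (an error-free transformation)."""
    s = a + b
    bv = s - a                    # the part of s that came from b
    e = (a - (s - bv)) + (b - bv)  # recover both rounding residues
    return s, e

def vec_sum(xs):
    """One error-free accumulation pass of the kind a renormalization
    algorithm chains: propagate the running sum through TwoSum and
    collect the rounding errors. The returned list represents exactly
    the same real value as the exact sum of xs."""
    s = xs[0]
    errs = []
    for x in xs[1:]:
        s, e = two_sum(s, x)
        errs.append(e)
    return errs + [s]
```

For instance, `two_sum(1.0, 1e-20)` returns `(1.0, 1e-20)`: the tiny addend, lost by ordinary floating-point addition, is preserved in the error term.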
For the entire collection see [Zbl 1369.68009].Automatic generation of elementary and readable proofs of inequalitieshttps://www.zbmath.org/1483.684802022-05-16T20:40:13.078697Z"Chen, Shi Ping"https://www.zbmath.org/authors/?q=ai:chen.shiping"Zhang, Jing Zhong"https://www.zbmath.org/authors/?q=ai:zhang.jingzhong(no abstract)How to get more out of your oracleshttps://www.zbmath.org/1483.684812022-05-16T20:40:13.078697Z"Cruz-Filipe, Luís"https://www.zbmath.org/authors/?q=ai:cruz-filipe.luis"Larsen, Kim S."https://www.zbmath.org/authors/?q=ai:larsen.kim-skak"Schneider-Kamp, Peter"https://www.zbmath.org/authors/?q=ai:schneider-kamp.peterSummary: Formal verification of large computer-generated proofs often relies on certified checkers based on oracles. We propose a methodology for such proofs, advocating a separation of concerns between formalizing the underlying theory and optimizing the algorithm implemented in the checker, based on the observation that such optimizations can benefit significantly from adequately adapting the oracle.
For the entire collection see [Zbl 1369.68009].Proof certificates in PVShttps://www.zbmath.org/1483.684822022-05-16T20:40:13.078697Z"Gilbert, Frédéric"https://www.zbmath.org/authors/?q=ai:gilbert.fredericSummary: The purpose of this work is to allow the proof system PVS to export proof certificates that can be checked externally. This is done through the instrumentation of PVS to record detailed proofs step by step during the proof search process. At the current stage of this work, proofs can be built for any PVS theory. However, some reasoning steps rely on unverified assumptions. For a restricted fragment of PVS, the proofs are exported to the universal proof checker Dedukti, and the unverified assumptions are proved externally using the automated theorem prover MetiTarski.
For the entire collection see [Zbl 1369.68009].Efficient, verified checking of propositional proofshttps://www.zbmath.org/1483.684832022-05-16T20:40:13.078697Z"Heule, Marijn"https://www.zbmath.org/authors/?q=ai:heule.marijn-j-h"Hunt, Warren jun."https://www.zbmath.org/authors/?q=ai:hunt.warren-jun"Kaufmann, Matt"https://www.zbmath.org/authors/?q=ai:kaufmann.matt"Wetzler, Nathan"https://www.zbmath.org/authors/?q=ai:wetzler.nathan-dSummary: Satisfiability (SAT) solvers -- and software in general -- sometimes have serious bugs. We mitigate these effects by validating the results. Today's SAT solvers emit proofs that can be checked with reasonable efficiency. However, these checkers are not trivial and can have bugs as well. We propose to check proofs using a formally verified program that adds little overhead to the overall process of proof validation. We have implemented a sequence of increasingly efficient, verified checkers using the ACL2 theorem proving system, and we discuss lessons from this effort. This work is already being used in industry and is slated for use in the next SAT competition.
For the entire collection see [Zbl 1369.68009].Automated theory exploration for interactive theorem proving: an introduction to the Hipster systemhttps://www.zbmath.org/1483.684842022-05-16T20:40:13.078697Z"Johansson, Moa"https://www.zbmath.org/authors/?q=ai:johansson.moaSummary: Theory exploration is a technique for automatically discovering new interesting lemmas in a mathematical theory development using testing. In this paper I will present the theory exploration system Hipster, which automatically discovers and proves lemmas about a given set of datatypes and functions in Isabelle/HOL. The development of Hipster was originally motivated by attempts to provide a higher level of automation for proofs by induction. Automating inductive proofs is tricky, not least because they often need auxiliary lemmas which themselves need to be proved by induction. We found that many such basic lemmas can be discovered automatically by theory exploration, and importantly, quickly enough for use in conjunction with an interactive theorem prover without boring the user.
For the entire collection see [Zbl 1369.68009].Formalization of the Lindemann-Weierstrass theoremhttps://www.zbmath.org/1483.684852022-05-16T20:40:13.078697Z"Bernard, Sophie"https://www.zbmath.org/authors/?q=ai:bernard.sophieSummary: This article details a formalization in Coq of the Lindemann-Weierstrass theorem which gives a transcendence criterion for complex numbers: this theorem establishes a link between the linear independence of a set of algebraic numbers and the algebraic independence of the exponentials of these numbers. As we follow Baker's proof, we discuss the difficulties of its formalization and explain how we resolved them in Coq. Most of these difficulties revolve around multivariate polynomials and their relationship with the conjugates of a univariate polynomial. Their study ultimately leads to alternative forms of the fundamental theorem of symmetric polynomials. This formalization uses mainly the Mathcomp library for the part relying on algebra, and the Coquelicot library and the Coq standard library of real numbers for the calculus part.
For the entire collection see [Zbl 1369.68009].How to simulate it in Isabelle: towards formal proof for secure multi-party computationhttps://www.zbmath.org/1483.684862022-05-16T20:40:13.078697Z"Butler, David"https://www.zbmath.org/authors/?q=ai:butler.david-k|butler.david-j|butler.david-g|butler.david-a|butler.david-c|butler.david-lee|butler.david-e"Aspinall, David"https://www.zbmath.org/authors/?q=ai:aspinall.david"Gascón, Adrià"https://www.zbmath.org/authors/?q=ai:gascon.adriaSummary: In cryptography, secure multi-party computation (MPC) protocols allow participants to compute a function jointly while keeping their inputs private. Recent breakthroughs are bringing MPC into practice, solving fundamental challenges for secure distributed computation. Just as with classic protocols for encryption and key exchange, precise guarantees are needed for MPC designs and implementations; any flaw will give attackers a chance to break privacy or correctness. In this paper we present the first (as far as we know) formalisation of some MPC security proofs. These proofs provide probabilistic guarantees in the computational model of security, but have a different character to machine proofs and proof tools implemented so far -- MPC proofs use a simulation approach, in which security is established by showing indistinguishability between execution traces in the actual protocol execution and an ideal world where security is guaranteed by definition. We show that existing machinery for reasoning about probabilistic programs can be adapted to this setting, paving the way to precisely check a new class of cryptography arguments. We implement our proofs using the CryptHOL framework inside Isabelle/HOL.
For the entire collection see [Zbl 1369.68009].Pappus's hexagon theorem in real projective planehttps://www.zbmath.org/1483.684872022-05-16T20:40:13.078697Z"Coghetto, Roland"https://www.zbmath.org/authors/?q=ai:coghetto.rolandSummary: In this article we prove, using Mizar [\textit{G. Bancerek} et al., Lect. Notes Comput. Sci. 9150, 261--279 (2015; Zbl 1417.68201); J. Autom. Reasoning 61, No. 1--4, 9--32 (2018; Zbl 1433.68530)], the Pappus's hexagon theorem in the real projective plane: ``Given one set of collinear points \(A, B, C\), and another set of collinear points \(a, b, c\), then the intersection points \(X, Y, Z\) of line pairs \(Ab\) and \(aB, Ac\) and \(aC, Bc\) and \(bC\) are collinear''.
More precisely, we prove that the structure \texttt{ProjectiveSpace TOP-REAL3} [\textit{W. Leonczuk} and \textit{K. Prazmowski}, ``A construction of analytical projective space'', Formaliz. Math. 1, No. 4, 761--766 (1990)] (where \texttt{TOP-REAL3} is a metric space defined in [\textit{A. Darmochwał}, ``The Euclidean space'', ibid. 2, No. 4, 599--603 (1991)]) satisfies the Pappus's axiom defined in [``Projective spaces. I'', ibid. 1, No. 4, 767--776 (1990)] by \textit{W. Leończuk} and \textit{K. Prażmowski}. \textit{E. Kusak} and \textit{W. Leończuk} formalized the Hessenberg theorem early in the MML [``Hessenberg theorem'', ibid. 2, No. 2, 217--219 (1991)]. With this result, the real projective plane is Desarguesian.
To prove Pappus's theorem, we give two different proofs. First, we use the techniques developed in the section ``Projective proofs of Pappus's theorem'' in the first chapter of [\textit{J. Richter-Gebert}, Perspectives on projective geometry. A guided tour through real and complex geometry. Berlin: Springer (2011; Zbl 1214.51001)]. Second, we use Pascal's theorem [\textit{R. Coghetto}, Formaliz. Math. 25, No. 2, 107--119 (2017; Zbl 1377.51003)].
In both cases, to prove some lemmas, we use \texttt{Prover9}, the successor of the \texttt{Otter} prover, and \texttt{ott2miz} by Josef Urban [\textit{P. Rudnicki} and \textit{J. Urban}, ``Escape to ATP for Mizar'', in: Proceedings of the first international workshop on proof eXchange for theorem proving, PxTP 2011. 46--59 (2011); \textit{A. Grabowski}, Lect. Notes Comput. Sci. 3839, 138--153 (2006; Zbl 1172.03309); J. Autom. Reasoning 55, No. 3, 211--221 (2015; Zbl 1356.68189)].
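The incidence configuration in the theorem statement can be checked numerically. The following Python sketch (our own illustration; the point coordinates are arbitrary choices, and the code is unrelated to the Mizar development) encodes points and lines as homogeneous triples and uses the cross product for both joins and meets:

```python
# Numeric sanity check of Pappus's hexagon theorem in the real projective
# plane, using homogeneous coordinates.  The concrete points below are
# arbitrary illustrative choices, not taken from the Mizar formalisation.

def cross(u, v):
    # For homogeneous triples, the cross product yields both the line
    # through two points and the point of intersection of two lines.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def collinear(u, v, w):
    # Three points are collinear iff their 3x3 determinant vanishes.
    return sum(u[i] * cross(v, w)[i] for i in range(3)) == 0

def meet(p, q, r, s):
    # Intersection point of line pq with line rs.
    return cross(cross(p, q), cross(r, s))

# One set of collinear points on y = 0 and another on y = 1.
A, B, C = (0, 0, 1), (1, 0, 1), (3, 0, 1)
a, b, c = (0, 1, 1), (2, 1, 1), (3, 1, 1)

X = meet(A, b, a, B)
Y = meet(A, c, a, C)
Z = meet(B, c, b, C)

print(collinear(X, Y, Z))  # True: the three cross points are collinear
```

Since all coordinates are integers, the collinearity test is exact rather than a floating-point approximation.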
In \texttt{Coq}, Pappus's theorem has been proved as an application of Grassmann-Cayley algebra [\textit{L. Fuchs} and \textit{L. Théry}, Lect. Notes Comput. Sci. 6877, 51--67 (2011; Zbl 1350.68233)] and more recently in Tarski's geometry [\textit{G. Braun} and \textit{J. Narboux}, J. Autom. Reasoning 58, No. 2, 209--230 (2017; Zbl 1405.03034)].Algorithm NextFit for the bin packing problemhttps://www.zbmath.org/1483.684882022-05-16T20:40:13.078697Z"Fujiwara, Hiroshi"https://www.zbmath.org/authors/?q=ai:fujiwara.hiroshi"Adachi, Ryota"https://www.zbmath.org/authors/?q=ai:adachi.ryota"Yamamoto, Hiroaki"https://www.zbmath.org/authors/?q=ai:yamamoto.hiroakiSummary: The bin packing problem is a fundamental and important optimization problem in theoretical computer science [\textit{M. R. Garey} and \textit{D. S. Johnson}, Computers and intractability. A guide to the theory of NP-completeness. San Francisco, CA: W. H. Freeman and Company (1979; Zbl 0411.68039); \textit{B. Korte} and \textit{J. Vygen}, Combinatorial optimization. Theory and algorithms. 5th ed. Berlin: Springer (2012; Zbl 1237.90001)]. An instance is a sequence of items, each being of positive size at most one. The task is to place all the items into bins so that the total size of items in each bin is at most one and the number of bins that contain at least one item is minimum.
Approximation algorithms have been intensively studied. Algorithm NextFit is perhaps the simplest of them. The algorithm repeatedly does the following: if the next unprocessed item in the sequence fits, in terms of size, into the bin into which the algorithm last placed an item, place the item into that bin; otherwise place the item into an empty bin. \textit{D. S. Johnson} [Near-optimal bin packing algorithms. Cambridge, MA: Massachusetts Institute of Technology (MIT) (PhD Thesis) (1973)] proved that the number of bins used by algorithm NextFit is less than twice the minimum number of bins needed to contain all items.
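The procedure just described can be sketched in a few lines; this Python rendering is our own illustration and is not part of the Mizar formalisation:

```python
# A direct sketch of algorithm NextFit as described above.

def next_fit(items):
    """Place each item (a size in (0, 1]) into a bin, returning the bins
    as lists of sizes.  A new bin is opened only when the current item
    does not fit into the most recently used bin."""
    bins = []
    for size in items:
        if bins and sum(bins[-1]) + size <= 1:
            bins[-1].append(size)   # fits into the last-used bin
        else:
            bins.append([size])     # open a fresh (empty) bin
    return bins

# Example: an optimal packing needs 2 bins ([0.5, 0.5] and [0.7, 0.3]),
# while NextFit uses 3 -- within Johnson's factor-2 guarantee.
print(len(next_fit([0.5, 0.7, 0.5, 0.3])))  # 3
```

Note that NextFit never revisits earlier bins, which is what makes it both an online algorithm and a one-line induction target for the factor-2 bound.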
In this article, we formalize in Mizar [\textit{G. Bancerek} et al., Lect. Notes Comput. Sci. 9150, 261--279 (2015; Zbl 1417.68201); J. Autom. Reasoning 61, No. 1--4, 9--32 (2018; Zbl 1433.68530)] the bin packing problem as follows: An instance is a sequence of positive real numbers that are each at most one. The task is to find a function that maps the indices of the sequence to positive integers such that the sum of the subsequence for each of the inverse images is at most one and the size of the image is minimum. We then formalize algorithm NextFit, its feasibility, its approximation guarantee, and the tightness of the approximation guarantee.Formalizing basic quaternionic analysishttps://www.zbmath.org/1483.684892022-05-16T20:40:13.078697Z"Gabrielli, Andrea"https://www.zbmath.org/authors/?q=ai:gabrielli.andrea"Maggesi, Marco"https://www.zbmath.org/authors/?q=ai:maggesi.marcoSummary: We present a computer formalization of quaternions in the HOL Light theorem prover. We give an introduction to our library for potential users and we discuss some implementation choices.
As an application, we formalize some basic parts of two recently developed mathematical theories, namely, slice regular functions and Pythagorean-hodograph curves.
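For readers unfamiliar with quaternions, the following minimal Python sketch (ours; the HOL Light library formalised in the paper is of course far more general) implements the Hamilton product and checks its non-commutativity, the feature that slice regular function theory is designed to work around:

```python
# Minimal quaternion arithmetic sketch: a quaternion a + bi + cj + dk
# is stored as the 4-tuple (a, b, c, d).

def qmul(p, q):
    # Hamilton product, expanded from i^2 = j^2 = k^2 = ijk = -1.
    a, b, c, d = p
    e, f, g, h = q
    return (a * e - b * f - c * g - d * h,
            a * f + b * e + c * h - d * g,
            a * g - b * h + c * e + d * f,
            a * h + b * g - c * f + d * e)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

# i * j = k, but j * i = -k: quaternion multiplication is not commutative.
print(qmul(i, j) == k)              # True
print(qmul(j, i) == (0, 0, 0, -1))  # True
```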
For the entire collection see [Zbl 1369.68009].Automating formalization by statistical and semantic parsing of mathematicshttps://www.zbmath.org/1483.684902022-05-16T20:40:13.078697Z"Kaliszyk, Cezary"https://www.zbmath.org/authors/?q=ai:kaliszyk.cezary"Urban, Josef"https://www.zbmath.org/authors/?q=ai:urban.josef"Vyskočil, Jiří"https://www.zbmath.org/authors/?q=ai:vyskocil.jiriSummary: We discuss the progress in our project, which aims to automate formalization by combining natural language processing with deep semantic understanding of mathematical expressions. We introduce the overall motivation and ideas behind this project, and then propose a context-based parsing approach that combines efficient statistical learning of deep parse trees with their semantic pruning by type checking and large-theory automated theorem proving. We show that our learning method allows efficient use of a large amount of contextual information, which in turn significantly boosts the precision of the statistical parsing and also makes it more efficient. This leads to a large improvement over our first results in parsing theorems from the Flyspeck corpus.
For the entire collection see [Zbl 1369.68009].Some properties of membership functions composed of triangle functions and piecewise linear functionshttps://www.zbmath.org/1483.684912022-05-16T20:40:13.078697Z"Mitsuishi, Takashi"https://www.zbmath.org/authors/?q=ai:mitsuishi.takashiSummary: IF-THEN rules in fuzzy inference are composed of multiple fuzzy sets (membership functions). IF-THEN rules can therefore be considered as a pair of membership functions [\textit{E. H. Mamdani}, ``Application of fuzzy algorithms for control of simple dynamic plant'', IEE Proc. 121, No. 12, 1585--1588 (1974; \url{doi:10.1049/piee.1974.0328})]. The evaluation function of fuzzy control is a composite function with fuzzy approximate reasoning and is a functional on the set of membership functions. We obtained continuity of the evaluation function and compactness of the set of membership functions [\textit{T. Mitsuishi} et al., ``Continuity of defuzzificat