Pellet, Jean-Philippe; Elisseeff, André
Using Markov blankets for causal structure learning. (English) Zbl 1225.68205
J. Mach. Learn. Res. 9, 1295-1342 (2008).

Summary: We show how a generic feature-selection algorithm returning strongly relevant variables can be turned into a causal structure-learning algorithm. We prove this under the faithfulness assumption for the data distribution. In a causal graph, the strongly relevant variables for a node \(X\) are its parents, children, and children's parents (or spouses), also known as the Markov blanket of \(X\). Identifying the spouses leads to the detection of the V-structure patterns and thus to causal orientations. Repeating the task for all variables yields a valid partially oriented causal graph. We first show an efficient way to identify the spouse links. We then perform several experiments in the continuous domain using the Recursive Feature Elimination feature-selection algorithm with Support Vector Regression and empirically verify the intuition behind this direct (but computationally expensive) approach. Within the same framework, we then devise a fast and consistent algorithm, Total Conditioning (TC), and a variant, \(\text{TC}_{\text{bw}}\), with an explicit backward feature-selection heuristic, for Gaussian data. After running a series of comparative experiments on five artificial networks, we argue that Markov blanket algorithms such as \(\text{TC}/\text{TC}_{\text{bw}}\) or Grow-Shrink scale better than the reference PC algorithm and provide higher structural accuracy.

Cited in 9 Documents

MSC:
68T05 Learning and adaptive systems in artificial intelligence

Keywords: causal structure learning; feature selection; Markov blanket; partial correlation; statistical test of conditional independence

Cite: \textit{J.-P. Pellet} and \textit{A. Elisseeff}, J. Mach. Learn. Res. 9, 1295--1342 (2008; Zbl 1225.68205)
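The keywords list partial correlation and a statistical test of conditional independence; for Gaussian data, such a test is typically built from Fisher's z-transform of the sample partial correlation. A minimal sketch of this standard test (assuming NumPy/SciPy; function names are illustrative and not taken from the authors' implementation):

```python
import numpy as np
from scipy import stats

def partial_corr(data, i, j, cond):
    """Partial correlation of columns i and j given the columns in cond,
    read off the inverse of the covariance (precision) matrix."""
    idx = [i, j] + list(cond)
    cov = np.cov(data[:, idx], rowvar=False)
    prec = np.linalg.inv(cov)
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def gaussian_ci_test(data, i, j, cond=(), alpha=0.05):
    """Fisher z-test: True iff X_i independent of X_j given X_cond
    is NOT rejected at level alpha (Gaussian assumption)."""
    n = data.shape[0]
    r = np.clip(partial_corr(data, i, j, cond), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))            # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)     # ~ N(0,1) under H0
    p_value = 2 * (1 - stats.norm.cdf(stat))
    return p_value > alpha
```

On data generated from a chain X -> Y -> Z, the test should accept X independent of Z given Y but reject marginal independence of X and Z, which is the kind of decision a constraint-based learner makes when deciding edges and V-structures.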