Poster
Causal Structure Recovery with Latent Variables under Milder Distributional and Graphical Assumptions
Xiuchuan Li · Kun Zhang · Tongliang Liu
Halle B
Traditional causal discovery approaches typically assume the absence of latent variables, a simplification that often fails to hold in real-world settings. Recently, there has been a surge of causal discovery methods that explicitly account for latent variables. Causal discovery with latent variables aims to reveal causal relations among observed variables in the presence of latent variables, whereas latent causal structure learning seeks to identify the latent variables themselves and infer their causal relations, a task that typically entails strong distributional and graphical assumptions. In this paper, we aim to recover the whole causal structure, involving both latent and observed variables, under milder assumptions. Specifically, we introduce two sets of assumptions: the first allows arbitrary distributions and requires only one pure child per latent variable; the second requires no pure children and imposes non-Gaussianity on only a small subset of variables; both allow causal edges between observed variables. Under either set, we prove that the whole causal structure of linear latent variable models is identifiable. Our proof is constructive, yielding theoretically sound and computationally efficient algorithms that first identify the latent variables from observed data and then infer the causal relation between any two variables.
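To make the setting concrete, the following is a minimal simulation sketch, not the authors' algorithm: it generates data from a toy linear latent variable model consistent with the abstract's assumptions (a latent edge L1 -> L2, a pure observed child per latent, non-Gaussian noise, and a direct edge between two observed variables), and then illustrates the kind of rank condition on cross-covariance matrices commonly used in this literature to detect latent variables. All variable names, coefficients, and the rank-style check are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent variables: L1 causes L2; noise terms are uniform, hence non-Gaussian.
L1 = rng.uniform(-1, 1, n)
L2 = 0.8 * L1 + rng.uniform(-1, 1, n)

# Observed variables (coefficients are arbitrary illustrative choices).
X1 = 1.0 * L1 + 0.5 * rng.uniform(-1, 1, n)              # pure child of L1
X3 = 0.9 * L1 + 0.5 * rng.uniform(-1, 1, n)              # another child of L1
X2 = 1.1 * L2 + 0.5 * rng.uniform(-1, 1, n)              # pure child of L2
X5 = 0.7 * L2 + 0.5 * rng.uniform(-1, 1, n)              # another child of L2
X4 = 0.6 * L2 + 0.8 * X3 + 0.5 * rng.uniform(-1, 1, n)   # observed edge X3 -> X4

def cross_cov_singular_values(left, right):
    """Singular values of the sample cross-covariance between two variable sets."""
    A = np.column_stack(left)
    B = np.column_stack(right)
    C = (A - A.mean(axis=0)).T @ (B - B.mean(axis=0)) / n
    return np.linalg.svd(C, compute_uv=False)

# Children of L1 vs. children of L2: all dependence between the two sets is
# channeled through a single latent, so the cross-covariance is close to rank 1
# (one dominant singular value, the second near zero up to sampling error).
print(cross_cov_singular_values([X1, X3], [X2, X5]))

# Replacing X5 with X4 brings in the direct observed edge X3 -> X4, which
# breaks the rank-one pattern: the second singular value is no longer negligible.
print(cross_cov_singular_values([X1, X3], [X2, X4]))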