Conventional methods for causal structure learning from data face significant challenges due to the combinatorial search space over possible graphs. Recently, the problem has been formulated as a continuous optimization task with an acyclicity constraint for learning Directed Acyclic Graphs (DAGs). Such a framework allows the use of deep generative models for causal structure learning, better capturing the relationship between data sample distributions and DAGs. However, to date no study has examined the use of the Wasserstein distance in the context of causal structure learning. Our model, named DAG-WGAN, combines a Wasserstein-based adversarial loss with an acyclicity constraint in an auto-encoder architecture. It learns the causal structure while simultaneously improving its data generation capability. We compare the performance of DAG-WGAN against models that do not use the Wasserstein metric in order to isolate its contribution to causal structure learning. Our experiments show that the model performs particularly well on high-cardinality data.
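Since the abstract mentions both a continuous acyclicity constraint and a Wasserstein-based adversarial loss, the following minimal sketch illustrates how these two ingredients are typically combined in this line of work. The acyclicity term is the standard NOTEARS characterization h(W) = tr(e^{W∘W}) − d; the function names, the critic-gap placeholder, and the augmented-Lagrangian weights `lam` and `rho` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import expm


def acyclicity_penalty(W: np.ndarray) -> float:
    # NOTEARS-style differentiable acyclicity measure:
    # h(W) = tr(exp(W ∘ W)) - d, where ∘ is the elementwise product
    # and d is the number of nodes. h(W) == 0 iff W encodes a DAG.
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)


def wasserstein_critic_gap(critic_real: np.ndarray, critic_fake: np.ndarray) -> float:
    # WGAN critic objective: the critic maximizes E[f(x_real)] - E[f(x_fake)],
    # an estimate of the Wasserstein-1 distance between the two distributions.
    # Returned here negated, as a loss to be minimized by the generator side.
    return -(float(np.mean(critic_real)) - float(np.mean(critic_fake)))


def combined_objective(recon_loss: float, critic_gap: float,
                       W: np.ndarray, lam: float = 1.0, rho: float = 1.0) -> float:
    # Illustrative combined objective: reconstruction and adversarial terms
    # plus an augmented-Lagrangian treatment of the acyclicity constraint,
    # L = recon + critic_gap + lam * h(W) + (rho / 2) * h(W)^2.
    h = acyclicity_penalty(W)
    return recon_loss + critic_gap + lam * h + 0.5 * rho * h ** 2
```

In the augmented-Lagrangian scheme, `lam` and `rho` are progressively increased across training rounds so that h(W) is driven toward zero, yielding an adjacency matrix that encodes a DAG while the Wasserstein term shapes the data generation quality.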