Learning a directed acyclic graph (DAG) that describes the causality of observed data is a challenging but important task. Due to the limited quantity and quality of observed data, and the non-identifiability of the causal graph, it is almost impossible to infer a single precise DAG. Some methods approximate the posterior distribution of DAGs by exploring the DAG space via Markov chain Monte Carlo (MCMC), but the DAG space grows super-exponentially with the number of nodes, so accurately characterizing the whole distribution over DAGs is intractable. In this paper, we propose Reinforcement Causal Structure Learning on Order Graph (RCL-OG), which uses an order graph instead of MCMC to model different DAG topological orderings and to reduce the problem size. RCL-OG first defines reinforcement learning with a new reward mechanism to approximate the posterior distribution of orderings in an efficient way, and uses deep Q-learning to update and transfer rewards between nodes. Next, it obtains the probability transition model of nodes on the order graph and computes the posterior probability of different orderings. In this way, we can sample from this model to obtain orderings with high probability. Experiments on synthetic and benchmark datasets show that RCL-OG provides accurate posterior probability approximation and achieves better results than competitive causal discovery algorithms.
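To make the sampling step concrete, the following is a minimal sketch (not the authors' implementation) of how a topological ordering could be drawn node-by-node from a learned transition model on the order graph. The Q-network interface, the softmax temperature, and the state encoding as a binary mask of already-placed nodes are illustrative assumptions; RCL-OG's actual reward mechanism and network architecture are defined in the paper.

```python
import torch

def sample_ordering(q_net: torch.nn.Module, num_nodes: int, temperature: float = 1.0):
    """Sample one topological ordering from a Boltzmann policy over Q-values.

    Assumes `q_net` maps a (1, num_nodes) state mask to (1, num_nodes) Q-values,
    one per candidate node to place next (a hypothetical interface).
    """
    placed = torch.zeros(num_nodes)            # state: mask of nodes already placed
    ordering = []
    for _ in range(num_nodes):
        q_values = q_net(placed.unsqueeze(0)).squeeze(0)  # Q-value for each node
        q_values[placed.bool()] = float("-inf")           # forbid re-placing nodes
        probs = torch.softmax(q_values / temperature, dim=0)
        node = torch.multinomial(probs, 1).item()         # transition on the order graph
        ordering.append(node)
        placed[node] = 1.0
    return ordering
```

Repeating this procedure yields orderings whose empirical frequencies approximate the modeled posterior over orderings; high-probability orderings are sampled more often.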