A vast amount of expert and domain knowledge is captured by causal structural priors, yet there has been little research on testing such priors for generalization and data synthesis purposes. We propose a novel model architecture, Causal Structural Hypothesis Testing, that can use nonparametric, structural causal knowledge and approximate a causal model's functional relationships with deep neural networks. We use these architectures to compare structural priors, akin to hypothesis testing, using a deliberate (non-random) split of training and testing data. Extensive simulations demonstrate the effectiveness of out-of-distribution generalization error as a proxy for causal structural prior hypothesis testing and offer a statistical baseline for interpreting results. We show that the variational version of the architecture, Causal Structural Variational Hypothesis Testing, can improve performance in low-SNR regimes. Owing to the models' simplicity and low parameter count, practitioners can test and compare structural prior hypotheses on small datasets and use the priors with the best generalization capacity to synthesize much larger, causally informed datasets. Finally, we validate our methods on a synthetic pendulum dataset and show a use case on a real-world trauma surgery ground-level falls dataset.