Learning models that offer robust out-of-distribution generalization and fast adaptation is a key challenge in modern machine learning. Building causal structure into neural networks holds the promise of accomplishing robust zero- and few-shot adaptation. Recent advances in differentiable causal discovery have proposed factorizing the data-generating process into a set of modules, i.e. one module for the conditional distribution of each variable, where only the causal parents are used as predictors. Such a modular decomposition of knowledge enables adaptation to distribution shifts by updating only a subset of parameters. In this work, we systematically study the generalization and adaptation performance of such modular neural causal models by comparing them to monolithic models and to structured models whose set of predictors is not constrained to the causal parents. Our analysis shows that modular neural causal models outperform the other models on both zero- and few-shot adaptation in low-data regimes and offer robust generalization. We also find that these effects are more pronounced for sparser graphs than for denser graphs.
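The modular decomposition described above can be sketched with a toy example. The following is a minimal illustration, not the paper's actual setup: it assumes a three-variable linear-Gaussian SCM with the chain graph x0 → x1 → x2, uses one least-squares "module" per variable whose inputs are restricted to its causal parents, and simulates a distribution shift that changes only the mechanism of x1, so only that one module needs to be refit.

```python
import numpy as np

# Hypothetical sketch of a modular causal model over 3 variables with
# assumed graph x0 -> x1 -> x2. Each variable gets its own module (here,
# a least-squares linear regressor) whose inputs are its causal parents.
rng = np.random.default_rng(0)
parents = {0: [], 1: [0], 2: [1]}  # assumed causal graph

def sample(n, shift=0.0):
    # Linear-Gaussian SCM; `shift` alters only the mechanism of x1.
    x0 = rng.normal(size=n)
    x1 = (2.0 + shift) * x0 + 0.1 * rng.normal(size=n)
    x2 = -1.5 * x1 + 0.1 * rng.normal(size=n)
    return np.stack([x0, x1, x2], axis=1)

def fit_module(X, i):
    # Regress variable i on its parents only (with an intercept term).
    pa = parents[i]
    A = np.column_stack([X[:, pa], np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, X[:, i], rcond=None)
    return w

# Fit all modules on data from the training distribution.
X = sample(1000)
modules = {i: fit_module(X, i) for i in range(3)}

# Distribution shift: only x1's mechanism changes, so few-shot adaptation
# refits module 1 alone; modules 0 and 2 are reused unchanged.
X_shift = sample(50, shift=1.0)
modules[1] = fit_module(X_shift, 1)
print(round(modules[1][0], 1))  # adapted slope of x1 on x0, close to 3.0
```

Because knowledge is stored per-mechanism, a sparse intervention touches few parameters, which is the intuition behind the faster low-data adaptation studied in the paper.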