Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis. We propose a new structural analysis method grounded in a formal theory of \textit{causal abstraction} that provides rich characterizations of model-internal representations and their roles in input/output behavior. In this method, neural representations are aligned with variables in interpretable causal models, and then \textit{interchange interventions} are used to experimentally verify that the neural representations have the causal properties of their aligned variables. We apply this method in a case study to analyze neural models trained on the Multiply Quantified Natural Language Inference (MQNLI) corpus, a highly complex NLI dataset that was constructed with a tree-structured natural logic causal model. We discover that a BERT-based model with state-of-the-art performance successfully realizes the approximate causal structure of the natural logic causal model, whereas a simpler baseline model fails to show any such structure, demonstrating that neural representations encode the compositional structure of MQNLI examples.
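To make the role of interchange interventions concrete, the following is a minimal sketch of the operation on a toy feed-forward network rather than the paper's BERT-based model; the \texttt{ToyModel} class, the choice of layer, and the neuron indices are illustrative assumptions, and the sketch assumes PyTorch forward hooks for the activation swap.

\begin{verbatim}
# Minimal sketch of an interchange intervention (illustrative only):
# run the model on a "base" input while overwriting selected hidden
# neurons with the values they take on a "source" input.
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self, d_in=8, d_hidden=16, d_out=3):
        super().__init__()
        self.layer1 = nn.Linear(d_in, d_hidden)
        self.layer2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = torch.relu(self.layer1(x))  # candidate aligned representation
        return self.layer2(h)

def interchange_intervention(model, base, source, neurons):
    cache = {}

    def save_hook(module, inputs, output):
        # Record the hidden state produced on the source input.
        cache["h"] = output.detach()

    def swap_hook(module, inputs, output):
        # Replace the selected neurons with their cached source values.
        patched = output.clone()
        patched[..., neurons] = cache["h"][..., neurons]
        return patched

    handle = model.layer1.register_forward_hook(save_hook)
    model(source)
    handle.remove()

    handle = model.layer1.register_forward_hook(swap_hook)
    out = model(base)
    handle.remove()
    return out

model = ToyModel()
base, source = torch.randn(1, 8), torch.randn(1, 8)
counterfactual_logits = interchange_intervention(
    model, base, source, neurons=[0, 1, 2])
print(counterfactual_logits)
\end{verbatim}

Under the method described above, an alignment is supported when the counterfactual output produced this way matches the output the interpretable causal model predicts after the analogous intervention on the aligned variable.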