Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis. We propose a new structural analysis method grounded in a formal theory of causal abstraction that provides rich characterizations of model-internal representations and their roles in input/output behavior. In this method, neural representations are aligned with variables in interpretable causal models, and then interchange interventions are used to experimentally verify that the neural representations have the causal properties of their aligned variables. We apply this method in a case study to analyze neural models trained on the Multiply Quantified Natural Language Inference (MQNLI) corpus, a highly complex NLI dataset that was constructed with a tree-structured natural logic causal model. We discover that a BERT-based model with state-of-the-art performance successfully realizes parts of the natural logic model's causal structure, whereas a simpler baseline model fails to show any such structure, demonstrating that BERT representations encode the compositional structure of MQNLI.
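To make the core operation concrete, the sketch below illustrates an interchange intervention on a hypothetical two-layer toy network (not the paper's BERT setup): a hidden representation is cached from a "source" run and patched into a "base" run, and the patched output can then be compared with the counterfactual prediction of the aligned causal model. The model architecture, dimensions, and the choice of which neurons count as "aligned" are all illustrative assumptions.

```python
# Minimal sketch of an interchange intervention, assuming a toy two-layer
# network; the alignment with BERT representations in the paper is more
# involved, but the patching logic is the same in spirit.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyModel(nn.Module):
    def __init__(self, d_in=4, d_hidden=8, d_out=3):
        super().__init__()
        self.layer1 = nn.Linear(d_in, d_hidden)
        self.layer2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.layer2(torch.relu(self.layer1(x)))

model = ToyModel()
base = torch.randn(1, 4)    # "base" input whose run we intervene on
source = torch.randn(1, 4)  # "source" input supplying the swapped-in value

# 1) Cache the hidden representation from the source run.
cached = {}
def cache_hook(module, inputs, output):
    cached["h"] = output.detach().clone()

handle = model.layer1.register_forward_hook(cache_hook)
model(source)
handle.remove()

# 2) Re-run the base input, overwriting only the neurons hypothesized to be
#    aligned with a causal-model variable (here, arbitrarily, the first 4).
def patch_hook(module, inputs, output):
    patched = output.clone()
    patched[:, :4] = cached["h"][:, :4]
    return patched  # returning a tensor replaces the module's output

handle = model.layer1.register_forward_hook(patch_hook)
patched_logits = model(base)
handle.remove()

# 3) In the analysis, this patched prediction would be compared against the
#    causal model's prediction under the analogous intervention.
print(patched_logits.argmax(dim=-1))
```

If, across many base/source pairs, the patched network output agrees with the causal model's counterfactual prediction, that supports the claim that the aligned neurons play the causal role of the corresponding variable.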