This paper is a write-up for the tutorial on "Fine-grained Interpretation and Causation Analysis in Deep NLP Models" that we are presenting at NAACL 2021. We present and discuss research on interpreting fine-grained components of a model from two perspectives: i) fine-grained interpretation, and ii) causation analysis. The former introduces methods to analyze individual neurons and groups of neurons with respect to a language property or a task. The latter studies the role of neurons and input features in explaining the decisions made by the model. We also discuss applications of neuron analysis, such as network manipulation and domain adaptation. Moreover, we present two toolkits, NeuroX and Captum, that support the functionalities discussed in this tutorial.