Neural network models have achieved state-of-the-art performance on a wide range of natural language processing (NLP) tasks. However, a long-standing criticism of neural network models is their lack of interpretability, which not only reduces the reliability of neural NLP systems but also limits their applicability in areas where interpretability is essential (e.g., health care). In response, growing interest in interpreting neural NLP models has spurred a diverse array of interpretation methods in recent years. In this survey, we provide a comprehensive review of interpretation methods for neural models in NLP. We first lay out a high-level taxonomy of interpretation methods in NLP: training-based approaches, test-based approaches, and hybrid approaches. We then describe the sub-categories within each category in detail, including influence-function-based methods, KNN-based methods, attention-based methods, saliency-based methods, and perturbation-based methods. Finally, we point out the deficiencies of current methods and suggest avenues for future research.