Natural Language Processing (NLP) models have become increasingly complex and widespread. With recent developments in neural networks, a growing concern is whether it is responsible to use these models. Concerns such as safety and ethics can be partially addressed by providing explanations. Furthermore, when models do fail, providing explanations is paramount for accountability purposes. To this end, interpretability serves to provide these explanations in terms that are understandable to humans. Central to what is understandable is how explanations are communicated. Therefore, this survey provides a categorization of how recent interpretability methods communicate explanations and discusses the methods in depth. Furthermore, the survey focuses on post-hoc methods, which provide explanations after a model is learned and are generally model-agnostic. A common concern for this class of methods is whether they accurately reflect the model. Hence, how these post-hoc methods are evaluated is discussed throughout the paper.