Developing explainability methods for Natural Language Processing (NLP) models is a challenging task, for two main reasons. First, the high dimensionality of the data (large number of tokens) results in low coverage, so the top tokens contribute little relative to overall model performance. Second, owing to their textual nature, the input variables, after appropriate transformations, are effectively binary (presence or absence of a token in an observation), making the input-output relationship difficult to understand. Common NLP interpretation techniques lack flexibility in resolution: they usually operate at the word level and provide either fully local (message-level) or fully global (over all messages) summaries. The goal of this paper is to create more flexible model explainability summaries over segments of observations or clusters of semantically related words. In addition, we introduce a root cause analysis method for NLP models that analyzes representative False Positive and False Negative examples from different segments. Finally, using a Yelp review data set with three segments (Restaurant, Hotel, and Beauty), we illustrate that exploiting group/cluster structure in words and/or messages can aid the interpretation of decisions made by NLP models and can be used to assess a model's sensitivity or bias with respect to gender, syntax, and word meaning.