Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks, such as translation, question answering, and text classification. However, this improvement comes at the expense of model explainability: black-box models make it difficult to understand a system's internals and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful, but they are insufficient because they require specialized knowledge to interpret. These factors have led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (a rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Because rationalization is a relatively new field, its literature is disorganized. This survey, the first on the topic, analyzes rationalization literature in NLP from 2007 to 2022. It presents the available methods, explainability evaluations, code, and datasets used across the various NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.