Explanation is important for text classification tasks. One prevalent type of explanation is rationales, which are snippets of the input text that suffice to yield the prediction and are meaningful to humans. Much research on rationalization has been based on the selective rationalization framework, which has recently been shown to be problematic due to its interlocking dynamics. In this paper, we address the interlocking problem in the multi-aspect setting, where the goal is to generate multiple rationales for multiple outputs. More specifically, we propose a multi-stage training method incorporating an additional self-supervised contrastive loss that helps to generate more semantically diverse rationales. Empirical results on the beer review dataset show that our method significantly improves rationalization performance.
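The abstract names the extra self-supervised contrastive objective only at a high level. The sketch below is one minimal, hypothetical way such a loss could push rationales of different aspects apart: pooled rationale representations of the same aspect (across examples in a batch) are treated as positives and all other representations as negatives in an InfoNCE-style objective. The function name, tensor layout, and positive/negative construction are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def aspect_contrastive_loss(rationale_embs: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical InfoNCE-style loss over per-aspect rationale embeddings.

    rationale_embs: (batch, num_aspects, hidden) pooled representations of the
    rationale selected for each aspect.  Same-aspect embeddings from other
    examples act as positives; embeddings of other aspects act as negatives,
    encouraging semantically diverse rationales.  Requires batch size > 1.
    """
    b, a, h = rationale_embs.shape
    z = F.normalize(rationale_embs, dim=-1)     # work in cosine-similarity space
    z = z.transpose(0, 1).reshape(a * b, h)     # group rows by aspect: (a*b, h)
    sim = z @ z.t() / temperature               # pairwise similarity logits
    mask = torch.eye(a * b, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))  # exclude self-similarity
    # Positive for each row: another example of the same aspect (rolled index).
    labels = (torch.arange(a * b, device=sim.device)
              .view(a, b).roll(shifts=1, dims=1).reshape(-1))
    return F.cross_entropy(sim, labels)
```

In a multi-stage setup, a term like this would simply be added (with a weight) to the task loss during the stage that trains the rationale generators; the weighting and staging here are assumptions for illustration.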