In interpretable NLP, we require faithful rationales that reflect the model's decision-making process on an explained instance. While prior work focuses on extractive rationales (a subset of the input words), we investigate their less-studied counterpart: free-text natural language rationales. We demonstrate that pipelines, the existing models for faithful extractive rationalization on information-extraction-style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales. We turn to models that jointly predict and rationalize, a widely used class of high-performance models for free-text rationalization whose faithfulness has not yet been established. We define label-rationale association as a necessary property of faithfulness: the internal mechanisms by which the model produces the label and the rationale must be meaningfully correlated. We propose two measurements to test this property: robustness equivalence and feature importance agreement. We find that state-of-the-art T5-based joint models exhibit both properties when rationalizing commonsense question answering and natural language inference, indicating their potential to produce faithful free-text rationales.
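To give intuition for the robustness-equivalence measurement, here is a minimal sketch of its core logic: under increasing input noise, a faithful joint model's label quality and rationale quality should break down at the same noise level. The function, curve values, and 50% breakdown threshold below are all illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def robustness_equivalence(noise_levels, label_acc, rationale_quality, threshold=0.5):
    """Find the noise level at which each output first falls below `threshold`
    of its clean (zero-noise) score. Matching breakdown points suggest the
    label and rationale are produced by associated mechanisms.
    Hypothetical scoring scheme, for illustration only."""
    label_acc = np.asarray(label_acc, dtype=float)
    rationale_quality = np.asarray(rationale_quality, dtype=float)

    def breakdown_point(scores):
        rel = scores / scores[0]  # degradation relative to the clean score
        below = np.nonzero(rel < threshold)[0]
        return noise_levels[below[0]] if below.size else None

    return breakdown_point(label_acc), breakdown_point(rationale_quality)

# Toy curves: both outputs collapse at noise level 1.0, so equivalence holds.
levels = [0.0, 0.5, 1.0, 2.0]
lab, rat = robustness_equivalence(levels, [0.90, 0.85, 0.30, 0.20],
                                          [0.80, 0.75, 0.30, 0.10])
```

In this toy case both breakdown points coincide (`lab == rat == 1.0`); a dissociated model would show the label surviving noise that destroys the rationale, or vice versa.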