Extractive rationales (i.e., subsets of input features) and natural language explanations (NLEs) are two predominant types of explanations for machine learning models. While NLEs can be more comprehensive than extractive rationales, machine-generated NLEs have been shown to fall short in terms of commonsense knowledge. In this paper, we show that commonsense knowledge can act as a bridge between extractive rationales and NLEs, improving both types of explanations. We introduce a self-rationalizing framework, called RExC, that (1) extracts rationales as the features most responsible for the predictions, (2) expands the extractive rationales using commonsense resources, and (3) selects the best-suited commonsense knowledge to generate NLEs and make the final prediction. Our framework surpasses the previous state of the art by a large margin in generating NLEs across five tasks in natural language and vision-language understanding. Self-rationalization with commonsense also strongly improves the quality of the extractive rationales and task performance over the previous best-performing models that also produce explanations.
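To make the three-stage design concrete, here is a minimal Python sketch of the pipeline described above. All component names (`rationale_extractor`, `commonsense_kb`, `knowledge_selector`, `nle_generator`, `predictor`) are hypothetical placeholders introduced for illustration; they are not the authors' actual implementation or API.

```python
# A minimal, illustrative sketch of the three-stage RExC pipeline.
# Every module name below is a hypothetical placeholder, not the
# authors' implementation.

from dataclasses import dataclass


@dataclass
class RExCOutput:
    rationale: list[str]   # extractive rationale (subset of input features)
    knowledge: list[str]   # selected commonsense snippets
    nle: str               # generated natural language explanation
    prediction: str        # final task prediction


def rexc_pipeline(inputs, rationale_extractor, commonsense_kb,
                  knowledge_selector, nle_generator, predictor):
    # (1) Extract the input features most responsible for the prediction.
    rationale = rationale_extractor(inputs)

    # (2) Expand the extractive rationale with candidate commonsense
    #     knowledge retrieved from an external resource.
    candidates = commonsense_kb.query(rationale)

    # (3) Select the best-suited knowledge, then condition both the NLE
    #     generator and the task predictor on it.
    knowledge = knowledge_selector(rationale, candidates)
    nle = nle_generator(inputs, rationale, knowledge)
    prediction = predictor(inputs, knowledge)

    return RExCOutput(rationale, knowledge, nle, prediction)
```

The key design choice the sketch highlights is that the selected commonsense knowledge feeds both the explanation generator and the predictor, which is how it serves as a bridge between the two explanation types.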