Neural predictive models have achieved remarkable performance improvements in various natural language processing tasks. However, most neural predictive models lack explainability for their predictions, limiting their practical utility. This paper proposes a neural predictive approach that makes a prediction and generates its corresponding explanation simultaneously. It leverages the knowledge entailed in explanations as an additional distillation signal for more efficient learning. We conduct a preliminary study on Chinese medical multiple-choice question answering, English natural language inference, and commonsense question answering tasks. The experimental results show that the proposed approach can generate reasonable explanations for its predictions even with a small-scale training corpus. The proposed method also achieves improved prediction accuracy on three datasets, indicating that prediction can benefit from generating an explanation during the decision process.
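The joint objective described above, treating the explanation as an additional supervision signal alongside the prediction loss, can be sketched as a weighted multi-task loss. This is a minimal illustrative sketch, not the authors' implementation: the function names and the weighting factor `lam` are assumptions introduced here for clarity.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the gold answer class."""
    return -math.log(probs[target_idx])

def sequence_nll(token_probs):
    """Sum of per-token negative log-likelihoods for the gold explanation tokens."""
    return -sum(math.log(p) for p in token_probs)

def joint_loss(pred_probs, gold_label, expl_token_probs, lam=0.5):
    """L = L_pred + lam * L_expl: the explanation-generation loss acts as
    an additional (distillation-like) training signal for the predictor.
    `lam` is a hypothetical weighting hyperparameter."""
    return cross_entropy(pred_probs, gold_label) + lam * sequence_nll(expl_token_probs)

# Toy example: 3-way classification with a 3-token explanation.
loss = joint_loss([0.7, 0.2, 0.1], 0, [0.9, 0.8, 0.95])
```

In such a setup, setting `lam = 0` recovers a plain predictor, so the weight controls how strongly explanation generation shapes the shared representation.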