Token-level attributions have been studied extensively as a way to explain model predictions for a wide range of classification tasks in NLP (e.g., sentiment analysis), but such explanation techniques are less explored for machine reading comprehension (RC) tasks. Although the transformer-based models used for RC are identical to those used for classification, the underlying reasoning these models perform is very different, so different types of explanations are required. We propose a methodology for evaluating explanations: an explanation should allow us to understand the RC model's high-level behavior with respect to a set of realistic counterfactual input scenarios. We define these counterfactuals for several RC settings, and by connecting explanation techniques' outputs to high-level model behavior, we can evaluate how useful different explanations really are. Our analysis suggests that pairwise explanation techniques are better suited to RC than token-level attributions, which are often unfaithful in the scenarios we consider. We additionally propose an improvement to an attention-based attribution technique, resulting in explanations that better reveal the model's behavior.
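To make the evaluation criterion above concrete, the following Python sketch illustrates the general protocol under simplifying assumptions of our own; it is not the paper's implementation. The `Counterfactual` class, the thresholded `simulated_change` rule, and the toy saliency scores are hypothetical stand-ins. The idea is simply that an explanation is scored by how well it predicts whether the model's answer actually changes under each counterfactual.

```python
# Minimal sketch (not the paper's code) of evaluating an explanation by how
# well it predicts model behavior on realistic counterfactuals.
# All names, thresholds, and toy data below are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Counterfactual:
    tokens: List[str]            # perturbed input
    edited_positions: List[int]  # which original token positions were edited


def simulated_change(attribution: List[float],
                     cf: Counterfactual,
                     threshold: float = 0.2) -> bool:
    """Using only the explanation, guess whether the model's answer should
    change: it should if the counterfactual edits high-attribution tokens."""
    edited_mass = sum(attribution[i] for i in cf.edited_positions)
    total_mass = sum(abs(a) for a in attribution) + 1e-9
    return edited_mass / total_mass > threshold


def faithfulness_score(predict: Callable[[List[str]], str],
                       attribution: List[float],
                       original: List[str],
                       counterfactuals: List[Counterfactual]) -> float:
    """Fraction of counterfactuals where the explanation-based simulation
    agrees with the model's actual behavior change."""
    base = predict(original)
    agree = 0
    for cf in counterfactuals:
        actual_change = predict(cf.tokens) != base
        if simulated_change(attribution, cf) == actual_change:
            agree += 1
    return agree / len(counterfactuals)


if __name__ == "__main__":
    # Toy stand-in for an RC model: flips its answer when "not" is removed.
    def predict(tokens: List[str]) -> str:
        return "no" if "not" in tokens else "yes"

    original = ["the", "drug", "was", "not", "effective"]
    attribution = [0.0, 0.1, 0.0, 0.8, 0.1]  # hypothetical saliency scores
    cfs = [
        Counterfactual(["the", "drug", "was", "effective"], [3]),
        Counterfactual(["a", "drug", "was", "not", "effective"], [0]),
    ]
    print(faithfulness_score(predict, attribution, original, cfs))  # 1.0
```

Under this framing, an attribution is unfaithful when it concentrates mass on tokens whose perturbation leaves the model's prediction unchanged, or vice versa; pairwise explanations can be plugged into the same protocol by scoring edited token pairs instead of single positions.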