To explain NLP models, importance measures, such as attention, which indicate which input tokens are important for a prediction, are popular. However, an open question is how faithfully these explanations reflect a model's logic, a property known as faithfulness. To answer this question, we propose a new faithfulness benchmark called Recursive ROAR. It works by recursively masking allegedly important tokens and then retraining the model; the principle is that this should degrade model performance more than masking random tokens does. The result is a performance curve as a function of the masking ratio. Furthermore, we propose a summary metric based on the area between the curves, which allows for easy comparison across papers, models, and tasks. To provide a thorough review, we evaluate 4 different importance measures on 8 different datasets, using both LSTM-attention and RoBERTa models. We find that the faithfulness of importance measures is both model-dependent and task-dependent. This conclusion contradicts previous evaluations in both the computer vision and the attention-faithfulness literature.
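The following is a minimal sketch of the recursive mask-and-retrain loop and the area-between-curves summary described above, not the authors' implementation. The helpers `train_model`, `evaluate`, `importance_scores`, `mask_top_tokens`, and `mask_random_tokens` are hypothetical stand-ins for the task-specific training, evaluation, and masking code.

```python
# Sketch of Recursive ROAR under the assumptions stated above.
import numpy as np

def recursive_roar(dataset, steps=5, step_size=0.1):
    perf_imp, perf_rnd = [], []
    data_imp, data_rnd = dataset, dataset
    for _ in range(steps + 1):
        # Retrain from scratch on each partially masked dataset.
        model_imp = train_model(data_imp)
        model_rnd = train_model(data_rnd)
        perf_imp.append(evaluate(model_imp, data_imp))
        perf_rnd.append(evaluate(model_rnd, data_rnd))

        # Recursive step: recompute importance on the already-masked data,
        # then mask an additional `step_size` fraction of tokens, either the
        # allegedly important ones or random ones.
        scores = importance_scores(model_imp, data_imp)
        data_imp = mask_top_tokens(data_imp, scores, step_size)
        data_rnd = mask_random_tokens(data_rnd, step_size)

    ratios = np.arange(steps + 1) * step_size
    # Area between the random-masking and importance-masking performance
    # curves; a larger area indicates a more faithful importance measure.
    return np.trapz(np.array(perf_rnd) - np.array(perf_imp), x=ratios)
```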