Given the increasingly prominent role NLP models (will) play in our lives, it is important for human expectations of model behavior to align with actual model behavior. Using Natural Language Inference (NLI) as a case study, we investigate the extent to which human-generated explanations of models' inference decisions align with how models actually make these decisions. More specifically, we define three alignment metrics that quantify how well natural language explanations align with model sensitivity to input words, as measured by integrated gradients. We then evaluate eight different models (the base and large versions of BERT, RoBERTa, and ELECTRA, as well as an RNN and a bag-of-words model), and find that the BERT-base model has the highest alignment with human-generated explanations on all alignment metrics. Focusing on the transformers, we find that the base versions tend to have higher alignment with human-generated explanations than their larger counterparts, suggesting that increasing the number of model parameters leads, in some cases, to worse alignment with human explanations. Finally, we find that a model's alignment with human explanations is not predicted by the model's accuracy, suggesting that accuracy and alignment are complementary ways to evaluate models.
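To make the measurement concrete, the sketch below shows one way to obtain per-token integrated-gradients attributions for an NLI prediction and compare the most-attributed tokens against the content words of a human explanation. This is only an illustration, not the paper's three alignment metrics: the checkpoint name (`textattack/bert-base-uncased-snli`), the example premise/hypothesis pair, the explanation word set, and the top-k overlap score are all assumptions, and integrated gradients is implemented directly over the word embeddings for transparency.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed publicly available BERT-base NLI checkpoint; any NLI classifier with a
# BERT backbone could be substituted here.
MODEL_NAME = "textattack/bert-base-uncased-snli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()


def integrated_gradients(enc, target, steps=32):
    """Per-token IG attributions w.r.t. the word embeddings, using a zero baseline."""
    emb = model.bert.embeddings.word_embeddings(enc["input_ids"]).detach()
    baseline = torch.zeros_like(emb)
    grad_sum = torch.zeros_like(emb)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate between baseline and input embeddings, then take the gradient
        # of the target-class logit at that point.
        interp = (baseline + alpha * (emb - baseline)).detach().requires_grad_(True)
        logits = model(
            inputs_embeds=interp,
            attention_mask=enc["attention_mask"],
            token_type_ids=enc["token_type_ids"],
        ).logits
        logits[0, target].backward()
        grad_sum += interp.grad
    # Riemann approximation of the IG integral, summed over embedding dimensions
    # to give one sensitivity score per input token.
    return ((emb - baseline) * grad_sum / steps).sum(-1).squeeze(0)


# Toy premise/hypothesis pair and explanation words (illustrative, not from e-SNLI).
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
explanation_words = {"playing", "guitar", "performing", "music"}

enc = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1).item()
attributions = integrated_gradients(enc, pred)

# A simplistic alignment score: overlap between the top-attributed tokens and the
# words a human used to explain the label.
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
top_tokens = {tokens[i] for i in attributions.abs().topk(5).indices.tolist()}
print("most-attributed tokens:", top_tokens)
print("overlap with explanation:", len(top_tokens & explanation_words) / len(explanation_words))
```

A design note on this sketch: attributions are taken with respect to the word-embedding layer (rather than raw token IDs) because token IDs are discrete, which is the standard way integrated gradients is applied to transformer text classifiers.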