Given the increasingly prominent role NLP models play (and will play) in our lives, it is important to evaluate models on their alignment with human expectations of how they behave. Using Natural Language Inference (NLI) as a case study, we investigated the extent to which human-generated explanations of models' inference decisions align with how models actually make these decisions. More specifically, we defined two alignment metrics that quantify how well natural language human explanations align with model sensitivity to input words, as measured by integrated gradients. We then evaluated six different transformer models (the base and large versions of BERT, RoBERTa, and ELECTRA) and found that the BERT-base model has the highest alignment with human-generated explanations on both alignment metrics. Additionally, the base versions of the models we surveyed tended to have higher alignment with human-generated explanations than their larger counterparts, suggesting that increasing the number of model parameters could result in worse alignment with human explanations. Finally, we found that a model's alignment with human explanations is not predicted by its accuracy on NLI, suggesting that accuracy and alignment are orthogonal, and that both are important ways to evaluate models.
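The word-sensitivity measure above relies on integrated gradients, which attribute a model's output to each input feature by integrating gradients along a straight-line path from a baseline to the input. The following is a minimal self-contained sketch of that computation on a toy differentiable function (not the paper's actual NLI models or metrics; the function, baseline, and step count here are illustrative assumptions):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate integrated gradients with a midpoint Riemann sum.

    Attribution_i = (x_i - b_i) * (1/steps) * sum over interpolation
    points of the gradient's i-th component.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # gradient evaluated on the straight line from baseline to x
        total += grad_f(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

# Toy "model": f(x) = x0^2 + 3*x1, with its analytic gradient.
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: np.array([2 * x[0], 3.0])

x = np.array([1.0, 2.0])
b = np.zeros(2)
attr = integrated_gradients(grad_f, x, b)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(b))
```

For transformer models, the same quantity is typically computed over input embeddings with an all-zeros or padding-token baseline, giving a per-word sensitivity score; the completeness property checked above is what makes the attributions comparable to a fixed output difference.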