Recently, NLP models have achieved remarkable progress across a variety of tasks; however, they have also been criticized for a lack of robustness. Many robustness problems can be attributed to models exploiting spurious correlations, or shortcuts, between the training data and the task labels. A model that exploits spurious correlations during training may fail to generalize to out-of-distribution data or be vulnerable to adversarial attacks. In this paper, we aim to automatically identify such spurious correlations in NLP models at scale. We first leverage existing interpretability methods to extract, from the input text, tokens that significantly affect the model's decision process. We then distinguish "genuine" tokens from "spurious" tokens by analyzing model predictions across multiple corpora, and further verify them through knowledge-aware perturbations. We show that our proposed method can effectively and efficiently identify a scalable set of "shortcuts", and that mitigating these leads to more robust models in multiple applications.
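The cross-corpus step above can be illustrated with a minimal sketch. The sketch below is a simplification and not the paper's actual pipeline: it substitutes raw token-label co-occurrence for model-based interpretability scores, and the function names, corpora, and threshold are illustrative assumptions. The idea it demonstrates is the same: a token whose correlation with the label holds in one corpus but not another is a candidate "shortcut".

```python
from collections import defaultdict

def token_label_correlation(corpus):
    """corpus: list of (tokens, binary_label) pairs.
    Returns {token: P(label=1 | token appears)} as a crude stand-in
    for a model-derived token importance score."""
    counts = defaultdict(lambda: [0, 0])  # token -> [occurrences, positives]
    for tokens, label in corpus:
        for tok in set(tokens):
            counts[tok][0] += 1
            counts[tok][1] += label
    return {tok: pos / total for tok, (total, pos) in counts.items()}

def flag_spurious(corpus_a, corpus_b, threshold=0.4):
    """Flag tokens whose label correlation diverges across two corpora --
    a proxy for shortcuts that do not transfer out of distribution.
    The threshold is an illustrative hyperparameter, not from the paper."""
    corr_a = token_label_correlation(corpus_a)
    corr_b = token_label_correlation(corpus_b)
    shared = corr_a.keys() & corr_b.keys()
    return {tok for tok in shared if abs(corr_a[tok] - corr_b[tok]) > threshold}

# Toy sentiment data: "spielberg" happens to co-occur with positive labels
# only in corpus_a, so it is flagged; "great" correlates consistently.
corpus_a = [(["spielberg", "great"], 1), (["spielberg", "fun"], 1), (["boring"], 0)]
corpus_b = [(["spielberg", "boring"], 0), (["great"], 1), (["boring"], 0)]
print(flag_spurious(corpus_a, corpus_b))  # → {'spielberg'}
```

In a real setting, `token_label_correlation` would be replaced by attribution scores from an interpretability method applied to the trained model, which is what distinguishes "affects the model's decision" from mere co-occurrence in the data.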