Recently, NLP models have achieved remarkable progress across a variety of tasks; however, they have also been criticized for not being robust. Many robustness problems can be attributed to models exploiting spurious correlations, or "shortcuts," between the training data and the task labels. Most existing work identifies a limited set of task-specific shortcuts via human priors or error analyses, which requires extensive expertise and effort. In this paper, we aim to automatically identify such spurious correlations in NLP models at scale. We first leverage existing interpretability methods to extract tokens that significantly affect the model's decision process from the input text. We then distinguish "genuine" tokens from "spurious" tokens by analyzing model predictions across multiple corpora, and further verify them through knowledge-aware perturbations. We show that our proposed method can effectively and efficiently identify a scalable set of shortcuts, and that mitigating these leads to more robust models in multiple applications.
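The cross-corpus analysis described above can be illustrated with a minimal sketch. The idea is that a genuinely predictive token should show a stable association with the label across different corpora, while a shortcut token's association is an artifact of one dataset and shifts or flips elsewhere. The code below is an illustrative toy (the function names, the simple label-association statistic used in place of a model-based attribution score, and the threshold are all hypothetical), not the paper's actual implementation:

```python
from collections import Counter

def token_label_assoc(examples):
    """Fraction of positive-label examples among those containing each token.
    A crude stand-in for a model-based token importance/attribution score."""
    contains = Counter()
    positives = Counter()
    for tokens, label in examples:
        for tok in set(tokens):
            contains[tok] += 1
            if label == 1:
                positives[tok] += 1
    return {tok: positives[tok] / contains[tok] for tok in contains}

def flag_spurious(corpus_a, corpus_b, threshold=0.3):
    """Flag tokens whose label association shifts sharply across corpora
    as candidate shortcuts; tokens with stable associations are treated
    as genuine. The threshold is an arbitrary illustrative choice."""
    assoc_a = token_label_assoc(corpus_a)
    assoc_b = token_label_assoc(corpus_b)
    shared = assoc_a.keys() & assoc_b.keys()
    return {tok for tok in shared
            if abs(assoc_a[tok] - assoc_b[tok]) > threshold}

# Toy sentiment-style data: "spielberg" co-occurs with positive labels only
# in corpus A (a dataset artifact), while "great" is predictive in both.
corpus_a = [
    (["great", "movie", "spielberg"], 1),
    (["spielberg", "film", "boring"], 1),   # shortcut: the name leaks the label
    (["great", "acting"], 1),
    (["terrible", "plot"], 0),
    (["boring", "film"], 0),
]
corpus_b = [
    (["great", "book"], 1),
    (["great", "story"], 1),
    (["spielberg", "film", "terrible"], 0),  # association flips in corpus B
    (["boring", "chapters"], 0),
    (["spielberg", "boring"], 0),
]

flagged = flag_spurious(corpus_a, corpus_b)
print(flagged)  # includes "spielberg"; "great" stays unflagged
```

A real pipeline would replace the co-occurrence statistic with saliency or attribution scores from the trained model, and add the knowledge-aware perturbation step (e.g., swapping a flagged token for a semantically equivalent one and checking whether the prediction changes) to verify each candidate.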