While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data. Recent work has explored using counterfactually-augmented data (CAD) -- data generated by minimally perturbing examples to flip the ground-truth label -- to identify robust features that are invariant under distribution shift. However, empirical results using CAD for OOD generalization have been mixed. To explain this discrepancy, we draw insights from a linear Gaussian model and demonstrate the pitfalls of CAD. Specifically, we show that (a) while CAD is effective at identifying robust features, it may prevent the model from learning unperturbed robust features, and (b) CAD may exacerbate existing spurious correlations in the data. Our results show that the lack of perturbation diversity in current CAD datasets limits their effectiveness for OOD generalization, calling for innovative crowdsourcing procedures to elicit diverse perturbations of examples.
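Pitfall (a) can be illustrated with a toy simulation in the spirit of the linear Gaussian model mentioned above. This is a minimal sketch under assumed parameters, not the paper's exact construction: the label drives two robust features (`x1`, `x2`) and one spurious feature (`xs`); counterfactual augmentation flips the label but minimally edits only `x1`, leaving `x2` and `xs` untouched.

```python
import numpy as np

# Illustrative linear Gaussian sketch of counterfactually-augmented data (CAD).
# Feature names, noise scales, and sample size are assumptions for exposition.
rng = np.random.default_rng(0)
n = 2000

y = rng.choice([-1.0, 1.0], size=n)      # ground-truth label
x1 = y + rng.normal(0, 1.0, n)           # robust feature that annotators perturb
x2 = y + rng.normal(0, 1.0, n)           # robust feature left unperturbed
xs = y + rng.normal(0, 0.5, n)           # spurious feature, correlated in-distribution
X = np.column_stack([x1, x2, xs])

# Counterfactual examples: flip the label and minimally edit ONLY x1.
y_cf = -y
x1_cf = x1 - 2 * y                       # flips the signal component of x1
X_cf = np.column_stack([x1_cf, x2, xs])

X_aug = np.vstack([X, X_cf])
y_aug = np.concatenate([y, y_cf])

# Least-squares fit (the linear Gaussian analogue of training a classifier).
w_orig, *_ = np.linalg.lstsq(X, y, rcond=None)
w_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print("original  weights [x1, x2, xs]:", np.round(w_orig, 3))
print("augmented weights [x1, x2, xs]:", np.round(w_aug, 3))
```

In this setup, augmentation concentrates weight on the perturbed robust feature `x1` while driving the weight on the *unperturbed* robust feature `x2` (along with `xs`) toward zero: because `x2` is identical across each counterfactual pair while the label flips, its correlation with the label vanishes in the augmented data.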