While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data. Recent work has explored using counterfactually-augmented data (CAD) -- data generated by minimally perturbing examples to flip the ground-truth label -- to identify robust features that are invariant under distribution shift. However, empirical results using CAD for OOD generalization have been mixed. To explain this discrepancy, we draw insights from a linear Gaussian model and demonstrate the pitfalls of CAD. Specifically, we show that (a) while CAD is effective at identifying robust features, it may prevent the model from learning unperturbed robust features; and (b) CAD may exacerbate existing spurious correlations in the data. On two crowdsourced CAD datasets, our results show that the lack of perturbation diversity limits their effectiveness on OOD generalization, calling for innovative crowdsourcing procedures to elicit diverse perturbations of examples.
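To make pitfall (a) concrete, the following is a minimal toy sketch, not the paper's exact setup: a linear Gaussian-style simulation in which counterfactual edits perturb only one of two robust features, and a linear model fit on the original-plus-counterfactual pairs assigns essentially no weight to the unperturbed robust feature. All variable names, noise levels, and the two-robust-feature construction are illustrative assumptions introduced here.

```python
# Toy sketch (assumptions, not the paper's model): CAD that perturbs only one
# robust feature can keep a linear model from learning the other robust feature.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Labels and features: two robust features (r1, r2) tied to the label,
# plus one spurious feature (sp) that is merely correlated with it.
y = rng.choice([-1.0, 1.0], size=n)
r1 = y + 0.5 * rng.standard_normal(n)   # robust feature 1
r2 = y + 0.5 * rng.standard_normal(n)   # robust feature 2
sp = y + 0.5 * rng.standard_normal(n)   # spurious feature
X = np.stack([r1, r2, sp], axis=1)

# Counterfactual augmentation that edits ONLY r1 to flip the label:
# the edited example keeps r2 and sp fixed but moves r1 to match the new label.
y_cf = -y
X_cf = X.copy()
X_cf[:, 0] = y_cf + 0.5 * rng.standard_normal(n)

# Fit ordinary least squares on the original + counterfactual pairs.
X_aug = np.vstack([X, X_cf])
y_aug = np.concatenate([y, y_cf])
w, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
print("weights on [r1, r2, spurious]:", np.round(w, 3))
# Within each pair, only r1 changes with the label, so the label's covariance
# with r2 cancels across the pair: the weight concentrates on r1 while r2, an
# equally robust but unperturbed feature, receives roughly zero weight.
```

Under these assumptions the cancellation is exact in expectation (the augmented covariance between the label and any unperturbed feature is zero), which is one way to read the claim that CAD can prevent a model from learning unperturbed robust features.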