Large pre-trained language models have shown remarkable performance over the past few years. These models, however, sometimes learn superficial features from the dataset and fail to generalize to distributions that differ from the training scenario. Several approaches have been proposed to reduce a model's reliance on these bias features, which can improve model robustness in the out-of-distribution setting. However, existing methods usually use a fixed low-capacity model to deal with various bias features, which ignores the learnability of those features. In this paper, we analyze a set of existing bias features and demonstrate that no single model works best for all cases. We further show that by choosing an appropriate bias model, we can obtain better robustness results than baselines with a more sophisticated model design.
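For context, a minimal sketch of product-of-experts (PoE) debiasing, one standard way such methods combine a main model with a bias model; the source does not specify its exact objective, so the function name `poe_loss` and the `main_model`/`bias_model` setup below are illustrative assumptions. The bias model's capacity is left as a free choice, which is precisely the knob this paper argues should be matched to the bias feature.

```python
# Sketch of product-of-experts debiasing (assumed setup, PyTorch).
import torch
import torch.nn.functional as F


def poe_loss(main_logits: torch.Tensor,
             bias_logits: torch.Tensor,
             labels: torch.Tensor) -> torch.Tensor:
    """Combine main- and bias-model predictions in log space, so the main
    model gains little from examples the bias model already solves."""
    combined = (F.log_softmax(main_logits, dim=-1)
                + F.log_softmax(bias_logits, dim=-1))
    # cross_entropy re-normalizes internally, giving the PoE objective.
    return F.cross_entropy(combined, labels)


# Hypothetical usage: main_model is e.g. a pre-trained transformer; the
# bias model is chosen per bias feature and frozen during this step.
# main_logits = main_model(batch)
# bias_logits = bias_model(batch).detach()  # only the main model learns
# loss = poe_loss(main_logits, bias_logits, labels)
```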