Despite the success of machine learning models on Natural Language Processing (NLP) tasks, their predictions frequently fail on out-of-distribution (OOD) samples. Prior work has focused on developing state-of-the-art methods for detecting OOD samples, yet the fundamental question of how OOD samples differ from in-distribution samples remains unanswered. This paper explores how data dynamics observed during model training can be used to understand, in extensive detail, the fundamental differences between OOD and in-distribution samples. We find that the syntactic characteristics of the samples the model consistently predicts incorrectly directly contradict each other between the OOD and in-distribution settings. In addition, we observe preliminary evidence supporting the hypothesis that models are more likely to latch onto trivial syntactic heuristics (e.g., word overlap between two sentences) when making predictions on OOD samples. We hope our preliminary study accelerates data-centric analysis of various machine learning phenomena.
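As a minimal sketch of the kind of per-example statistics typically meant by "data dynamics" during training, the snippet below computes mean confidence, variability, and correctness of the gold-label probability across epochs; this is an assumption about the measurement, and the function and variable names are illustrative rather than taken from the paper.

```python
# Illustrative sketch (not the paper's code): per-example training dynamics,
# assuming "data dynamics" refers to statistics of the gold-label probability
# recorded at each training epoch.
import numpy as np

def training_dynamics(gold_probs: np.ndarray):
    """gold_probs: array of shape (num_epochs, num_examples) holding the
    probability the model assigns to the gold label at each epoch."""
    confidence = gold_probs.mean(axis=0)           # mean gold-label probability per example
    variability = gold_probs.std(axis=0)           # how much that probability fluctuates
    correctness = (gold_probs > 0.5).mean(axis=0)  # fraction of epochs predicted correctly (binary case)
    return confidence, variability, correctness

# Toy usage: 3 epochs, 4 examples.
probs = np.array([[0.90, 0.20, 0.60, 0.10],
                  [0.95, 0.30, 0.40, 0.15],
                  [0.97, 0.25, 0.70, 0.05]])
conf, var, corr = training_dynamics(probs)
# Examples with low confidence and low correctness are the ones the model
# consistently predicts incorrectly -- the group whose syntactic characteristics
# are compared between OOD and in-distribution settings.
print(conf, var, corr)
```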