Pre-trained LMs have shown impressive performance on downstream NLP tasks, but we have yet to establish a clear understanding of their sophistication in processing, retaining, and applying information presented in their input. In this paper we tackle a component of this question by examining the robustness of models' ability to deploy relevant context information in the face of distracting content. We present models with cloze tasks requiring use of critical context information, and introduce distracting content to test how robustly the models retain and use that critical information for prediction. We also systematically manipulate the nature of these distractors, to shed light on the dynamics of models' use of contextual cues. We find that although in simple contexts models appear to make predictions based on understanding and applying relevant facts from prior context, the presence of distracting but irrelevant content clearly confuses model predictions. In particular, models appear especially susceptible to factors of semantic similarity and word position. The findings are consistent with the conclusion that LM predictions are driven in large part by superficial contextual cues, rather than by robust representations of context meaning.
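To make the probing setup concrete, the following is a minimal sketch of how a cloze-with-distractor comparison of this kind might be run with an off-the-shelf masked LM. It is not the authors' code or stimuli: the model name, example sentences, and distractor content are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's actual stimuli or code) of probing a
# masked LM with a cloze task, with and without a distracting but irrelevant sentence.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Cloze prompt whose target ("kitchen") must be recovered from the prior context.
base_context = (
    "John put the milk in the kitchen. "
    "Later, he went to get the milk from the [MASK]."
)

# Same prompt with irrelevant content mentioning a semantically similar location.
distractor_context = (
    "John put the milk in the kitchen. "
    "Mary spent the afternoon reading in the garage. "
    "Later, he went to get the milk from the [MASK]."
)

for label, text in [("no distractor", base_context), ("with distractor", distractor_context)]:
    top = fill_mask(text)[0]  # highest-probability completion for the masked position
    print(f"{label}: predicted '{top['token_str']}' (p={top['score']:.3f})")
```

Comparing the top prediction and its probability across the two conditions gives a simple measure of how much the distractor content shifts the model away from the contextually correct answer.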