Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements across different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not it contains grammaticality violations. We find that model judgements are generally robust when the test inputs are placed in randomly sampled linguistic contexts, but become substantially unstable when the context contains syntactic structures matching those in the critical test content. For all tested models (GPT-2 and five variants of OPT), acceptable contexts with matching syntactic structures significantly improve models' judgements, and conversely, unacceptable contexts with matching but violated syntactic structures significantly worsen them. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple surface-level matches between the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
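To make the evaluation paradigm concrete, the following is a minimal sketch (not the paper's evaluation code) of a minimal-pair targeted syntactic judgement with an optional prepended context, assuming the HuggingFace `transformers` library and the public `gpt2` checkpoint; the example sentences and the matching-structure context are hypothetical illustrations, not items from the paper's test sets.

```python
# Minimal-pair acceptability judgement with an optional prepended context.
# Sketch only: assumes `transformers` and the public `gpt2` checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str, context: str = "") -> float:
    """Sum of token log-probabilities of `sentence`, conditioned on `context`."""
    context_ids = tokenizer.encode(context) if context else []
    sentence_ids = tokenizer.encode(sentence)
    input_ids = torch.tensor([context_ids + sentence_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Score only the test sentence's tokens; each token is predicted
    # from the position immediately before it.
    total = 0.0
    offset = len(context_ids)
    for i, tok in enumerate(sentence_ids):
        pos = offset + i - 1
        if pos < 0:
            continue  # first token of a context-free input has no conditioning position
        total += log_probs[0, pos, tok].item()
    return total

# The model "judges" correctly if it assigns higher probability to the
# acceptable member of the minimal pair.
acceptable = "The keys to the cabinet are on the table."
unacceptable = "The keys to the cabinet is on the table."
# Hypothetical context sharing the critical structure (subject-verb agreement
# across an intervening prepositional phrase).
context = "The dogs near the fence are barking loudly."

for ctx in ("", context):
    ok = sentence_logprob(acceptable, ctx) > sentence_logprob(unacceptable, ctx)
    print(f"context={'yes' if ctx else 'no'}: correct preference = {ok}")
```

In this setup, varying the context amounts to changing the string prepended before scoring: longer contexts, contexts drawn from different syntactic phenomena, or contexts with deliberately violated structures can all be swapped in without altering the scoring procedure itself.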