There have been many efforts to understand what grammatical knowledge (e.g., the ability to identify the part of speech of a token) is encoded in large pre-trained language models (LMs). This is typically done through `Edge Probing' (EP) tests: supervised classification tasks that predict a grammatical property of a span (e.g., whether it is a particular part of speech) using only the token representations from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Specifically, we focus on the task of Question Answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well, or in adversarial situations where the model is forced to learn wrong correlations. From a similar finding, some recent papers conclude that fine-tuning does not change linguistic knowledge in encoders, but they do not provide an explanation. We find that EP models themselves are susceptible to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results as expected.
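To make the probing setup concrete, below is a minimal sketch of an edge probing classifier: a linear probe trained on span representations pooled from a frozen pre-trained encoder. The encoder name, label count, and mean-pooling choice are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal edge probing (EP) sketch: a frozen LM encoder provides token
# representations; only a small classifier on top is trained to predict a
# grammatical property (e.g., part of speech) of a span.
# Assumptions: bert-base-uncased encoder, 17 POS-style labels, mean pooling.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EdgeProbe(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_labels=17):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        for p in self.encoder.parameters():   # probe only: keep the encoder frozen
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, span_start, span_end):
        # Token representations from the (frozen) LM encoder.
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                    # (batch, seq_len, hidden)
        # Mean-pool the tokens inside each span, then classify the span.
        span_reprs = [
            hidden_states[i, s:e].mean(dim=0)
            for i, (s, e) in enumerate(zip(span_start, span_end))
        ]
        return self.classifier(torch.stack(span_reprs))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The quick brown fox jumps ."], return_tensors="pt")
probe = EdgeProbe()
logits = probe(batch["input_ids"], batch["attention_mask"],
               span_start=[1], span_end=[2])   # probe the span covering "quick"
print(logits.shape)                            # torch.Size([1, 17])
```

In the fine-tuning experiments described above, the same probe would be attached to the QA-fine-tuned encoder instead of the pre-trained one, and the EP test compares the two probes' accuracies.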