Fine-tuned pre-trained language models have achieved impressive performance on standard natural language processing benchmarks. However, the generalizability of the resulting models remains poorly understood. We do not know, for example, whether excellent benchmark performance implies that a model truly generalizes. In this study, we analyze a fine-tuned BERT model from different perspectives using relation extraction. We also characterize differences in generalization behavior under our proposed probes. Empirically, we find that BERT suffers a robustness bottleneck under randomization, adversarial, and counterfactual tests, as well as under selection and semantic biases. These findings highlight opportunities for future improvement. Our open-sourced testbed DiagnoseRE is available at \url{https://github.com/zjunlp/DiagnoseRE}.