The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information. Indeed, it is natural to assume that these pre-trained representations do encode some level of linguistic knowledge as they have brought about large empirical improvements on a wide variety of NLP tasks, which suggests they are learning true linguistic generalization. In this work, we focus on intrinsic probing, an analysis technique where the goal is not only to identify whether a representation encodes a linguistic attribute but also to pinpoint where this attribute is encoded. We propose a novel latent-variable formulation for constructing intrinsic probes and derive a tractable variational approximation to the log-likelihood. Our results show that our model is versatile and yields tighter mutual information estimates than two intrinsic probes previously proposed in the literature. Finally, we find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
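To make the quantities in the abstract concrete, the following is a generic sketch of how an intrinsic probe can yield a mutual information estimate and how a latent variable over dimension subsets can be handled with a variational bound. The notation here (attribute $A$, representation $R$, dimension subset $C$, probe $q$) is illustrative and is not taken from the paper itself.

```latex
% Mutual information between a linguistic attribute A and the
% representation restricted to a subset C of its dimensions:
%   I(A; R_C) = H(A) - H(A \mid R_C).
% A probe q(a \mid r_C) gives a cross-entropy upper bound on H(A | R_C),
% hence a lower bound on the mutual information:
\[
  I(A; R_C) \;\geq\; H(A) \;-\;
  \mathbb{E}_{(a, r) \sim p}\bigl[-\log q(a \mid r_C)\bigr].
\]
% If the subset C is treated as a latent variable with prior p(C),
% the marginal log-likelihood of the attribute,
%   \log p(a \mid r) = \log \sum_{C} p(C)\, p(a \mid r_C),
% is intractable to optimize directly; an ELBO-style variational
% approximation with an approximate posterior q(C) is
\[
  \log p(a \mid r) \;\geq\;
  \mathbb{E}_{q(C)}\bigl[\log p(a \mid r_C)\bigr]
  \;-\; \mathrm{KL}\bigl(q(C)\,\|\,p(C)\bigr).
\]
```

The first inequality is the standard argument for reading a probe's cross-entropy loss as a mutual information lower bound; the second is the usual evidence lower bound, written here only to illustrate what "a tractable variational approximation to the log-likelihood" can look like under these assumptions.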