Are pairs of words that tend to occur together also likely to stand in a linguistic dependency? This empirical question is motivated by a long history of literature in cognitive science, psycholinguistics, and NLP. In this work we contribute an extensive analysis of the relationship between linguistic dependencies and statistical dependence between words. Improving on previous work, we introduce the use of large pretrained language models to compute contextualized estimates of the pointwise mutual information between words (CPMI). For multiple models and languages, we extract dependency trees which maximize CPMI, and compare them to gold-standard linguistic dependencies. Overall, we find that CPMI dependencies achieve an unlabelled undirected attachment score of at most $\approx 0.5$. While far above chance, and consistently above a non-contextualized PMI baseline, this score is generally comparable to a simple baseline formed by connecting adjacent words. We analyze which kinds of linguistic dependencies are best captured by CPMI dependencies, and also find marked differences between the estimates of the large pretrained language models, illustrating how their different training schemes affect the types of dependencies they capture.
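To make the extraction and evaluation step concrete, the sketch below illustrates the general procedure described above; it is not the authors' implementation. It assumes per-sentence CPMI scores have already been computed and symmetrized, uses `networkx` to find the tree that maximizes total CPMI, and scores that tree against gold dependencies with the unlabelled undirected attachment score (UUAS). All variable names and the toy data are illustrative.

```python
# Hedged sketch (not the paper's code): extract a maximum-CPMI spanning tree
# for one sentence and compute UUAS against gold dependency edges.
import networkx as nx

def cpmi_tree(scores):
    """scores[i][j]: symmetrized CPMI between words i and j (i != j)."""
    n = len(scores)
    g = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(i, j, weight=scores[i][j])
    # Undirected spanning tree maximizing total CPMI over its edges.
    return {frozenset(e) for e in nx.maximum_spanning_tree(g).edges()}

def uuas(predicted_edges, gold_edges):
    """Fraction of gold dependency edges recovered, ignoring direction."""
    gold = {frozenset(e) for e in gold_edges}
    return len(predicted_edges & gold) / len(gold)

# Toy example: a 4-word sentence; gold edges given as (head, dependent) pairs.
scores = [[0.0, 2.1, 0.3, 0.1],
          [2.1, 0.0, 1.5, 0.2],
          [0.3, 1.5, 0.0, 1.8],
          [0.1, 0.2, 1.8, 0.0]]
gold = [(1, 0), (1, 2), (2, 3)]
pred = cpmi_tree(scores)
print(uuas(pred, gold))  # 1.0 here, since the CPMI tree matches the gold tree
```

Representing edges as unordered pairs mirrors the unlabelled undirected evaluation: only which word pairs are connected matters, not the direction or label of the dependency.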