Structural probing work has found evidence for latent syntactic information in pre-trained language models. However, much of this analysis has focused on monolingual models, and analyses of multilingual models have employed correlational methods that are confounded by the choice of probing tasks. In this study, we causally probe multilingual language models (XGLM and multilingual BERT) as well as monolingual BERT-based models across various languages; we do this by performing counterfactual perturbations on neuron activations and observing the effect on models' subject-verb agreement probabilities. We observe where in the model and to what extent syntactic agreement is encoded in each language. We find significant neuron overlap across languages in autoregressive multilingual language models, but not masked language models. We also find two distinct layer-wise effect patterns and two distinct sets of neurons used for syntactic agreement, depending on whether the subject and verb are separated by other tokens. Finally, we find that behavioral analyses of language models are likely underestimating how sensitive masked language models are to syntactic information.
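The core method described above — counterfactually perturbing neuron activations and measuring the effect on the model's subject-verb agreement probabilities — can be sketched in miniature. The toy network, neuron indexing, and two-way verb vocabulary below are illustrative stand-ins, not the paper's actual models (XGLM, multilingual BERT) or intervention procedure.

```python
# A minimal sketch of causal neuron probing: zero out one hidden unit
# (the counterfactual perturbation) and measure the change in the model's
# preference for the agreeing verb form. All shapes and the toy model
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "language model": one hidden layer, softmax over two verb forms
# (index 0 = singular form, index 1 = plural form).
W_in = rng.normal(size=(4, 8))    # input dim 4 -> 8 hidden neurons
W_out = rng.normal(size=(8, 2))   # 8 hidden neurons -> 2 verb forms

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def agreement_probs(x, ablate_neuron=None):
    """Forward pass; optionally zero one hidden neuron (the intervention)."""
    h = np.tanh(x @ W_in)
    if ablate_neuron is not None:
        h = h.copy()
        h[ablate_neuron] = 0.0  # counterfactual perturbation
    return softmax(h @ W_out)

x = rng.normal(size=4)            # stand-in encoding of a singular subject
base = agreement_probs(x)

# Per-neuron effect = change in P(correct verb form) under ablation.
effects = [base[0] - agreement_probs(x, ablate_neuron=i)[0]
           for i in range(8)]
top = int(np.argmax(np.abs(effects)))
print(f"most influential neuron: {top}, "
      f"effect on P(singular): {effects[top]:+.3f}")
```

Repeating this per neuron, per layer, and per language is what lets the study localize where agreement is encoded and compare neuron overlap across languages.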