Probes are models devised to investigate the encoding of knowledge -- e.g. syntactic structure -- in contextual representations. Probes are often designed for simplicity, which has led to restrictions on probe design that may not allow for the full exploitation of the structure of encoded information; one such restriction is linearity. We examine the case of a structural probe (Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic structure in contextual representations through learning only linear transformations. By observing that the structural probe learns a metric, we are able to kernelize it and develop a novel non-linear variant with an identical number of parameters. We test on 6 languages and find that the radial-basis function (RBF) kernel, in conjunction with regularization, achieves a statistically significant improvement over the baseline in all languages -- implying that at least part of the syntactic knowledge is encoded non-linearly. We conclude by discussing how the RBF kernel resembles BERT's self-attention layers and speculate that this resemblance leads to the RBF-based probe's stronger performance.
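To make the kernelization idea concrete, a minimal sketch follows; the notation (contextual vectors $h_i$, probe matrix $B$, bandwidth $\gamma$) is ours, and the exact formulation may differ from the paper's. The Hewitt and Manning (2019) probe predicts the parse-tree distance between words $i$ and $j$ from a squared distance that depends on the representations only through inner products,
\[
d_B(h_i, h_j)^2 \;=\; \lVert B h_i - B h_j \rVert_2^2 \;=\; \langle B h_i, B h_i \rangle \;-\; 2\,\langle B h_i, B h_j \rangle \;+\; \langle B h_j, B h_j \rangle ,
\]
i.e. a squared Mahalanobis metric with $A = B^\top B$. A kernelized variant can then replace each inner product with a kernel evaluation,
\[
d_k(h_i, h_j)^2 \;=\; k(B h_i, B h_i) \;-\; 2\,k(B h_i, B h_j) \;+\; k(B h_j, B h_j),
\qquad
k_{\mathrm{RBF}}(u, v) \;=\; \exp\!\big(-\gamma \lVert u - v \rVert_2^2\big),
\]
which introduces non-linearity while leaving $B$ as the only learned parameters, hence the identical parameter count. The claimed resemblance to self-attention is plausible because, when the transformed vectors have (roughly) constant norm, $\exp(-\gamma \lVert u - v \rVert_2^2) \propto \exp(2\gamma\, u^\top v)$, which has the same form as an unnormalized softmax attention weight $\exp(q^\top k / \sqrt{d})$.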