Past work probing compositionality in sentence embedding models faces difficulties in determining the causal impact of implicit syntax representations. Given a sentence, we construct a neural module network based on its syntax parse and train it end-to-end to approximate the sentence's embedding as generated by a transformer model. The distillability of a transformer into a Syntactic NeurAl Module Net (SynNaMoN) then captures whether syntax is a strong causal model of its compositional ability. Furthermore, we address questions about the geometry of semantic composition by specifying the internal architecture and linearity of individual SynNaMoN modules. We find differences in the distillability of various sentence embedding models that broadly correlate with their performance, but observe that distillability does not vary considerably with model size. We also present preliminary evidence that much syntax-guided composition in sentence embedding models is linear, and that non-linearities may serve primarily to handle non-compositional phrases.
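To make the distillation setup concrete, the following is a minimal sketch (not the authors' released code) of syntax-guided neural module distillation, assuming binarized parse trees and a frozen teacher that produces fixed-width sentence embeddings. Each constituent label gets its own composition module, the tree is folded bottom-up, and the root output is regressed onto the teacher embedding; the `linear` flag illustrates the linearity probe on module internals. All names (`CompositionModule`, `distill_step`, the 768-dimensional width) are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

DIM = 768  # assumed teacher embedding width

class CompositionModule(nn.Module):
    """Composes two child representations; linear=True probes whether
    syntax-guided composition is geometrically linear."""
    def __init__(self, dim=DIM, linear=True):
        super().__init__()
        layers = [nn.Linear(2 * dim, dim)]
        if not linear:
            layers += [nn.ReLU(), nn.Linear(dim, dim)]
        self.net = nn.Sequential(*layers)

    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=-1))

class SynNaMoN(nn.Module):
    def __init__(self, constituent_labels, vocab_size, dim=DIM, linear=True):
        super().__init__()
        self.leaf = nn.Embedding(vocab_size, dim)  # word-level leaf module
        # one composition module per constituent label (e.g. "NP", "VP", "S")
        self.by_label = nn.ModuleDict(
            {lab: CompositionModule(dim, linear) for lab in constituent_labels})

    def forward(self, tree):
        # tree is either an int token id (leaf) or (label, left, right)
        if isinstance(tree, int):
            return self.leaf(torch.tensor(tree))
        label, left, right = tree
        return self.by_label[label](self.forward(left), self.forward(right))

def distill_step(model, optimizer, tree, teacher_embedding):
    """One end-to-end distillation step: fit the module network's root
    representation to the frozen teacher's sentence embedding."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(tree), teacher_embedding)
    loss.backward()  # distillability ~ how low this loss can be driven
    optimizer.step()
    return loss.item()
```

Under this framing, comparing the achievable distillation loss across teachers operationalizes "distillability," and re-running the fit with `linear=True` versus `linear=False` modules operationalizes the linearity question posed above.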