We introduce a novel approach to incorporate syntax into natural language inference (NLI) models. Our method uses contextual token-level vector representations from a pretrained dependency parser. Like other contextual embedders, our method is broadly applicable to any neural model. We experiment with four strong NLI models (decomposable attention model, ESIM, BERT, and MT-DNN), and show consistent benefit to accuracy across three NLI benchmarks.
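The abstract does not spell out how the parser-derived vectors are fed into the downstream models, so the sketch below is only a rough illustration, not the authors' implementation. It shows one common way to use contextual token-level vectors as an auxiliary embedder: concatenating frozen parser encoder states with ordinary word embeddings before the NLI encoder. All names (`ParserAugmentedEmbedder`, `parser_dim`, the random stand-in parser states) are hypothetical.

```python
# Minimal sketch (assumed integration, not the paper's released code):
# concatenate word embeddings with frozen token vectors from a pretrained
# dependency parser, then project back to the NLI model's input size.
import torch
import torch.nn as nn


class ParserAugmentedEmbedder(nn.Module):
    """Augment word embeddings with parser-derived contextual vectors."""

    def __init__(self, vocab_size: int, word_dim: int, parser_dim: int):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.proj = nn.Linear(word_dim + parser_dim, word_dim)

    def forward(self, token_ids: torch.Tensor, parser_states: torch.Tensor) -> torch.Tensor:
        # token_ids:     (batch, seq_len)
        # parser_states: (batch, seq_len, parser_dim), produced by a pretrained
        #                dependency parser's encoder and treated as frozen features.
        words = self.word_emb(token_ids)                      # (batch, seq_len, word_dim)
        combined = torch.cat([words, parser_states], dim=-1)  # per-token concatenation
        return self.proj(combined)                            # back to word_dim for the NLI encoder


# Toy usage: random tensors stand in for real parser encodings.
embedder = ParserAugmentedEmbedder(vocab_size=1000, word_dim=300, parser_dim=400)
ids = torch.randint(0, 1000, (2, 7))
parser_states = torch.randn(2, 7, 400)
print(embedder(ids, parser_states).shape)  # torch.Size([2, 7, 300])
```

Because the augmentation happens at the embedding layer, the same wrapper could in principle sit in front of any of the four NLI models the abstract lists, which is the sense in which the method is "broadly applicable to any neural model."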