Modern natural language understanding models depend on pretrained subword embeddings, but applications may need to reason about words that were never or only rarely seen during pretraining. We show that examples that depend critically on a rare word are more challenging for natural language inference models. We then explore how a model could learn to use definitions, provided in natural text, to overcome this handicap. Our model's understanding of a definition is usually weaker than its understanding of a well-modeled word embedding, but it recovers most of the performance gap left by a completely untrained word.
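As a rough illustration of the general idea only (this is a hypothetical sketch, not the model described above), one simple way to use a definition for an out-of-vocabulary word is to fall back to a pooled representation of the in-vocabulary words in its definition; the toy embedding table, the `embed_word` helper, and the example word "ketch" below are all assumptions for the sake of the example.

```python
# Hypothetical sketch: back off to the mean of the definition's word vectors
# when a word has no pretrained embedding. Not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

# Toy pretrained embedding table; a real system would load subword vectors.
vocab = {"a", "small", "fast", "sailing", "boat", "is", "the"}
emb = {w: rng.normal(size=8) for w in vocab}


def embed_word(word: str, definition: str) -> np.ndarray:
    """Return the pretrained vector if available; otherwise pool the
    vectors of the in-vocabulary words appearing in the definition."""
    if word in emb:
        return emb[word]
    def_vecs = [emb[t] for t in definition.lower().split() if t in emb]
    if not def_vecs:
        return np.zeros(8)  # completely untrained fallback
    return np.mean(def_vecs, axis=0)


# "ketch" is unseen during pretraining, so its vector is built from its definition.
print(embed_word("ketch", "a small sailing boat"))
```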