Attention-based models have become the new state of the art in natural language understanding tasks such as question answering and sentence similarity. Recent models, such as BERT and XLNet, score a pair of sentences (A and B) using multiple cross-attention operations, a process in which each word in sentence A attends to all words in sentence B and vice versa. As a result, computing the similarity between a query sentence and a set of candidate sentences requires the propagation of all query-candidate sentence pairs through a stack of cross-attention layers. This exhaustive process becomes computationally prohibitive when the number of candidate sentences is large. In contrast, sentence embedding techniques learn a sentence-to-vector mapping and compute the similarity between sentence vectors via simple elementary operations such as dot product or cosine similarity. In this paper, we introduce a sentence embedding method based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks. The outline of the proposed method is as follows: given a cross-attentive teacher model (e.g., a fine-tuned BERT), we train a sentence-embedding-based student model to reconstruct the sentence-pair scores obtained by the teacher model. We empirically demonstrate the effectiveness of our distillation method on five GLUE sentence-pair tasks. Our method significantly outperforms several ELMo variants and other sentence embedding methods, while accelerating the computation of query-candidate sentence-pair similarities by several orders of magnitude, with an average relative degradation of 4.6% compared to BERT.
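To make the distillation objective concrete, the following is a minimal sketch in PyTorch. The mean-pooling student encoder, the dot-product scoring, and the MSE loss are illustrative assumptions for exposition, not the paper's exact architecture; the teacher's cross-attention scores are treated as a precomputed black-box input.

```python
# Minimal sketch of score distillation from a cross-attentive teacher
# into a sentence-embedding student. Hypothetical toy architecture:
# the real student/teacher models are not specified here.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Toy sentence-embedding student: embed tokens, mean-pool to one vector."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> (batch, dim)
        return self.embed(token_ids).mean(dim=1)

def distillation_loss(student: SentenceEncoder,
                      a_ids: torch.Tensor,
                      b_ids: torch.Tensor,
                      teacher_scores: torch.Tensor) -> torch.Tensor:
    """MSE between the student's dot-product similarity and the teacher's
    cross-attention pair scores (assumed precomputed, e.g. by fine-tuned BERT)."""
    za, zb = student(a_ids), student(b_ids)
    student_scores = (za * zb).sum(dim=-1)   # elementary dot-product scoring
    return nn.functional.mse_loss(student_scores, teacher_scores)

# Usage with random stand-in data: 8 pairs, 16 tokens each, vocabulary of 1000.
student = SentenceEncoder(vocab_size=1000)
a_ids = torch.randint(0, 1000, (8, 16))
b_ids = torch.randint(0, 1000, (8, 16))
teacher_scores = torch.randn(8)              # stand-in for teacher pair scores
loss = distillation_loss(student, a_ids, b_ids, teacher_scores)
loss.backward()
```

The speed-up claimed above follows from this structure: candidate embeddings can be precomputed once, so scoring a query against N candidates reduces to a single matrix-vector product instead of N forward passes through a stack of cross-attention layers.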