Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model. Traditional knowledge distillation methods fall into two categories: response-based methods and feature-based methods. Response-based methods are the most widely used but suffer from a lower upper bound on model performance, while feature-based methods impose constraints on the vocabularies and tokenizers. In this paper, we propose a tokenizer-free method, liberal feature-based distillation (LEAD). LEAD aligns the distributions between the teacher model and the student model; it is effective, extendable, and portable, and places no requirements on vocabularies, tokenizers, or model architectures. Extensive experiments demonstrate the effectiveness of LEAD on several widely used benchmarks, including MS MARCO Passage, TREC Passage 19, TREC Passage 20, MS MARCO Document, TREC Document 19, and TREC Document 20.
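To make the idea of distribution alignment concrete, the following is a minimal sketch, not the paper's actual implementation: it only illustrates aligning a teacher's and a student's score distributions with a temperature-scaled KL-divergence loss. The function name, tensor shapes, and temperature default are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def distribution_alignment_loss(teacher_scores: torch.Tensor,
                                student_scores: torch.Tensor,
                                temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between teacher and student score distributions.

    Both tensors are assumed to have shape (batch, num_candidates),
    where each row holds relevance scores for one query over its
    candidate passages (hypothetical setup for illustration).
    """
    # The teacher distribution is detached and treated as the target.
    p_teacher = F.softmax(teacher_scores.detach() / temperature, dim=-1)
    log_q_student = F.log_softmax(student_scores / temperature, dim=-1)
    # "batchmean" matches the mathematical definition of KL divergence.
    return F.kl_div(log_q_student, p_teacher, reduction="batchmean")


if __name__ == "__main__":
    teacher = torch.randn(4, 8)  # hypothetical teacher scores
    student = torch.randn(4, 8)  # hypothetical student scores
    print(distribution_alignment_loss(teacher, student).item())
```

Because the loss depends only on output score distributions, a sketch like this would not require the teacher and student to share a vocabulary, tokenizer, or architecture.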