Pre-trained models such as BERT have achieved strong results on a wide range of natural language processing tasks. However, their large number of parameters incurs significant memory consumption and inference time, which makes them difficult to deploy on edge devices. In this work, we propose LRC-BERT, a knowledge distillation method based on contrastive learning that fits the output of the intermediate layers from the angular distance perspective, an aspect not considered by existing distillation methods. Furthermore, we introduce a gradient perturbation-based training architecture in the training phase to increase the robustness of LRC-BERT, which is the first such attempt in knowledge distillation. In addition, to better capture the distribution characteristics of the intermediate layers, we design a two-stage training method for the total distillation loss. Finally, evaluating on 8 datasets from the General Language Understanding Evaluation (GLUE) benchmark, the proposed LRC-BERT outperforms existing state-of-the-art methods, which demonstrates the effectiveness of our approach.
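To make the "contrastive learning from the angular distance aspect" idea concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes an InfoNCE-style objective over cosine (angular) similarity between student and teacher intermediate-layer representations, with other samples in the batch as negatives. The function name, the linear projection bridging student and teacher hidden sizes, and the dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def angular_contrastive_distill_loss(student_hidden, teacher_hidden, temperature=1.0):
    """Hypothetical sketch of an angular-distance contrastive distillation loss.

    student_hidden, teacher_hidden: (batch, dim) intermediate-layer representations
    from the student and teacher. For each sample, the teacher representation of the
    same sample is the positive; teacher representations of the other samples in the
    batch act as negatives.
    """
    s = F.normalize(student_hidden, dim=-1)   # unit vectors, so dot product = cosine of the angle
    t = F.normalize(teacher_hidden, dim=-1)
    logits = s @ t.T / temperature            # (batch, batch) cosine-similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    # InfoNCE-style loss: pull each student vector toward its own teacher vector
    # (small angular distance) and push it away from the other samples' teachers.
    return F.cross_entropy(logits, targets)

# Usage sketch with random tensors standing in for intermediate-layer outputs.
student_h = torch.randn(8, 312)   # e.g. a small student hidden size (assumed)
teacher_h = torch.randn(8, 768)   # BERT-base hidden size
proj = torch.nn.Linear(312, 768)  # illustrative projection from student to teacher space
loss = angular_contrastive_distill_loss(proj(student_h), teacher_h)
loss.backward()
```

In such a setup, the loss depends only on the angles between normalized representations rather than their magnitudes, which is one way to realize intermediate-layer fitting under an angular-distance criterion; the paper's actual loss, negative sampling scheme, and layer mapping may differ.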