Distilling state-of-the-art transformer models into lightweight student models is an effective way to reduce computation cost at inference time. The student models are typically compact transformers with fewer parameters, but expensive operations such as self-attention persist. Therefore, the improved inference speed may still be unsatisfactory for real-time or high-volume use cases. In this paper, we aim to further push the limit of inference speed by distilling teacher models into bigger, sparser student models -- bigger in that they scale up to billions of parameters; sparser in that most of the model parameters are n-gram embeddings. Our experiments on six single-sentence text classification tasks show that these student models retain 97% of the RoBERTa-Large teacher performance on average, while achieving up to a 600x speed-up at inference time on both GPUs and CPUs. Further investigation reveals that our pipeline is also helpful for sentence-pair classification tasks and in domain generalization settings.
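To make the idea of a "bigger, sparser" student concrete, the following is a minimal sketch (not the authors' released code) of a classifier whose parameters are almost entirely n-gram embeddings, trained with soft-label distillation from a teacher. The hashing scheme, bucket count, embedding dimension, and temperature are illustrative assumptions; inference involves only an embedding lookup and one small linear layer, with no self-attention.

```python
# Minimal sketch, assuming a hashed n-gram vocabulary and KL-based distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NGramBagStudent(nn.Module):
    """Averages hashed n-gram embeddings, then applies a small linear head.

    Nearly all parameters live in `self.embed`, so the model can scale to
    billions of parameters while keeping per-example compute tiny.
    """

    def __init__(self, num_buckets=10_000_000, dim=100, num_classes=2):
        super().__init__()
        self.embed = nn.EmbeddingBag(num_buckets, dim, mode="mean")
        self.head = nn.Linear(dim, num_classes)

    def forward(self, ngram_ids, offsets):
        # ngram_ids: 1-D tensor of bucket ids for the whole batch;
        # offsets: start index of each example within ngram_ids.
        return self.head(self.embed(ngram_ids, offsets))


def hash_ngrams(tokens, n_max=4, num_buckets=10_000_000):
    """Maps every 1..n_max-gram in `tokens` to a bucket id (illustrative hashing)."""
    ids = []
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            ids.append(hash(" ".join(tokens[i:i + n])) % num_buckets)
    return ids


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
```

In this sketch the student is trained on the teacher's soft predictions over (possibly unlabeled) in-domain text; because the forward pass is a bag-of-n-grams lookup plus one matrix multiply, it is fast on both GPUs and CPUs regardless of how large the embedding table grows.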