Knowledge distillation (KD) is a common approach to improving model performance in automatic speech recognition (ASR), where a student model is trained to imitate the output behaviour of a teacher model. However, traditional KD methods suffer from a teacher label storage problem, especially when the training corpora are large. Although on-the-fly teacher label generation avoids this problem, training becomes significantly slower because the teacher model must be evaluated for every batch. In this paper, we reformulate the generation of teacher labels as a codec problem. We propose a novel Multi-codebook Vector Quantization (MVQ) approach that compresses teacher embeddings into codebook indexes (CI). Based on this, we propose a KD training framework (MVQ-KD) in which a student model predicts the CI generated from the embeddings of a self-supervised pre-trained teacher model. Experiments on the 100-hour clean subset of LibriSpeech show that the MVQ-KD framework achieves performance comparable to traditional KD methods (l1, l2) while requiring 256 times less storage. When the full LibriSpeech dataset is used, MVQ-KD yields 13.8% and 8.2% relative word error rate reductions (WERRs) on test-clean and test-other for a non-streaming transducer, and 4.0% and 4.9% for a streaming transducer. The implementation of this work has been released as part of the open-source project icefall.
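To make the idea of compressing teacher embeddings into codebook indexes concrete, the following is a minimal sketch of a multi-codebook (residual) vector quantizer in PyTorch. It is an illustrative toy, not the icefall implementation: the codebook sizes, embedding dimension, and the use of simple nearest-neighbour residual quantization are assumptions made here for clarity, and the actual MVQ training of the codebooks is not shown.

```python
import torch


def quantize_multi_codebook(embeddings, codebooks):
    """Compress teacher embeddings to per-codebook indexes via residual VQ.

    embeddings: (N, D) tensor of teacher embedding vectors.
    codebooks:  list of (K, D) tensors, one per codebook (K codewords each).
    Returns:    (N, num_codebooks) int64 tensor of codebook indexes (CI).
    """
    residual = embeddings
    indexes = []
    for codebook in codebooks:
        # Pick the nearest codeword for the current residual (squared L2).
        dists = torch.cdist(residual, codebook)   # (N, K)
        idx = dists.argmin(dim=1)                 # (N,)
        indexes.append(idx)
        # Subtract the chosen codeword; the next codebook quantizes the remainder.
        residual = residual - codebook[idx]
    return torch.stack(indexes, dim=1)            # (N, num_codebooks)


# Illustrative sizes only: 8 codebooks of 256 entries each, so every
# embedding is stored as 8 one-byte indexes instead of D float values.
D, K, num_codebooks = 512, 256, 8
codebooks = [torch.randn(K, D) for _ in range(num_codebooks)]
teacher_embeddings = torch.randn(100, D)
ci = quantize_multi_codebook(teacher_embeddings, codebooks)
print(ci.shape)  # torch.Size([100, 8])
```

With these assumed sizes, one 512-dimensional float32 embedding (2048 bytes) is replaced by 8 one-byte indexes, a 256-fold reduction of the same order as the storage saving reported in the abstract; the dimensions used in the actual experiments may differ. During MVQ-KD training, the student is then trained to predict these CI as auxiliary targets instead of regressing the raw teacher embeddings.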