In this paper, we introduce the Kaizen framework, which uses a continuously improving teacher to generate pseudo-labels for semi-supervised automatic speech recognition (ASR). The proposed approach uses a teacher model that is updated as the exponential moving average (EMA) of the student model parameters. We demonstrate that it is critical for the EMA to be accumulated in full-precision floating point. The Kaizen framework can be seen as a continuous version of the iterative pseudo-labeling approach to semi-supervised training. It is applicable to different training criteria, and in this paper we demonstrate its effectiveness for frame-level hybrid hidden Markov model-deep neural network (HMM-DNN) systems as well as sequence-level Connectionist Temporal Classification (CTC) based models. On large-scale real-world unsupervised public videos in UK English and Italian, the proposed approach i) shows more than 10% relative word error rate (WER) reduction over standard teacher-student training, and ii) using just 10 hours of supervised data and a large amount of unsupervised data, closes the gap to the upper-bound supervised ASR systems that use 650 hours and 2700 hours of supervised data, respectively.
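The core mechanism described above, a teacher maintained as a full-precision EMA of the student parameters, can be illustrated with the following minimal sketch. This is not the authors' implementation: the function name `update_ema_teacher`, the decay value 0.999, and the PyTorch-style parameter iteration are our assumptions for illustration only.

```python
import copy
import torch

def make_ema_teacher(student: torch.nn.Module) -> torch.nn.Module:
    # Initialize the teacher as a full-precision (float32) copy of the student,
    # since the paper reports that accumulating the EMA in full precision is critical.
    teacher = copy.deepcopy(student).float()
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def update_ema_teacher(student: torch.nn.Module,
                       teacher: torch.nn.Module,
                       decay: float = 0.999) -> None:
    # teacher <- decay * teacher + (1 - decay) * student, accumulated in float32
    # even if the student is trained in reduced precision.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param.detach().float(), alpha=1.0 - decay)
```

In a training loop, `update_ema_teacher` would be called after each student optimizer step, and the teacher would then be used to produce pseudo-labels on unsupervised data; the choice of decay controls how quickly the teacher tracks the student.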