Recently, masked prediction pre-training has seen remarkable progress in self-supervised learning (SSL) for speech recognition. It usually requires a codebook obtained in an unsupervised way, making it less accurate and harder to interpret. We propose two supervision-guided codebook generation approaches that improve both automatic speech recognition (ASR) performance and pre-training efficiency: decoding with a hybrid ASR system to generate phoneme-level alignments (named PBERT), or performing clustering on supervised speech features extracted from an end-to-end CTC model (named CTC clustering). Both the hybrid and CTC models are trained on the same small amount of labeled speech as used in fine-tuning. Experiments demonstrate significant superiority of our methods over various SSL and self-training baselines, with up to 17.0% relative WER reduction. Our pre-trained models also show good transferability in a non-ASR speech task.
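The CTC-clustering idea above can be sketched minimally: run k-means over frame-level features from a CTC encoder and use each frame's cluster index as its codebook label for masked prediction. This is an illustrative sketch only, with random arrays standing in for real CTC encoder features; the `kmeans` helper and all names here are hypothetical, not the paper's implementation.

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Plain k-means over frame-level features; returns centroids and per-frame labels."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k randomly chosen frames.
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid (squared Euclidean distance).
        d = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each centroid to the mean of its assigned frames.
        for j in range(k):
            members = feats[labels == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids, labels

# Stand-in for hidden states extracted from a CTC-trained encoder
# (shape: frames x feature_dim); real features would come from a forward pass.
rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 64))
centroids, codebook_ids = kmeans(feats, k=8)
# Each frame now carries a discrete codebook id usable as a masked-prediction target.
```

Because the encoder was trained with CTC supervision, its features (and hence the clusters) are expected to align better with phonetic content than clusters of raw acoustic features.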