Recent years have witnessed a boom in self-supervised learning (SSL) across various areas, including speech processing. Speech-based SSL models have shown promising performance on a range of speech-related tasks. However, training SSL models is computationally expensive, so a common practice is to fine-tune a released SSL model on a specific downstream task. This requires the front-end input to be consistent between pre-training and fine-tuning, which becomes a problem when the optimal front-end for the downstream task differs from the one used in pre-training. In this paper, we propose a simple but effective front-end adapter to address this front-end discrepancy. By minimizing the distance between the outputs of different front-ends, filterbank features (Fbank) can be made compatible with SSL models pre-trained on waveform input. Experimental results demonstrate the effectiveness of the proposed front-end adapter on several popular SSL models for the speech recognition task.
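To make the adapter idea concrete, below is a minimal PyTorch sketch of training a front-end adapter by distance minimization. The layer choices, dimensions (80-dim Fbank, 512-dim SSL front-end), and the L1 distance are illustrative assumptions of ours, not the paper's exact architecture or loss:

```python
# Sketch: a small adapter maps Fbank features into the embedding space of an
# SSL model's waveform CNN front-end, trained by minimizing the distance
# between the two front-ends' outputs. Shapes and modules are assumptions.
import torch
import torch.nn as nn


class FrontEndAdapter(nn.Module):
    """Maps Fbank frames to the SSL front-end's embedding dimension."""

    def __init__(self, fbank_dim: int = 80, ssl_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fbank_dim, ssl_dim),
            nn.GELU(),
            nn.Linear(ssl_dim, ssl_dim),
        )

    def forward(self, fbank: torch.Tensor) -> torch.Tensor:
        # fbank: (batch, frames, fbank_dim) -> (batch, frames, ssl_dim)
        return self.net(fbank)


def adapter_loss(adapter: FrontEndAdapter,
                 fbank: torch.Tensor,
                 waveform_feats: torch.Tensor) -> torch.Tensor:
    """L1 distance between adapted Fbank and the (frozen) waveform front-end
    output, assumed already time-aligned to the same frame rate."""
    return (adapter(fbank) - waveform_feats).abs().mean()


# Toy usage with random tensors standing in for real front-end outputs.
adapter = FrontEndAdapter()
fbank = torch.randn(4, 100, 80)       # Fbank features: (batch, frames, dim)
wave_out = torch.randn(4, 100, 512)   # frozen waveform CNN front-end output
opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)

opt.zero_grad()
loss = adapter_loss(adapter, fbank, wave_out)
loss.backward()
opt.step()
```

Once trained this way, the adapter's output can be fed to the pre-trained SSL encoder in place of the waveform front-end's output, so fine-tuning can proceed from Fbank input.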