Recently, pre-trained Transformer models have attracted growing interest in the field of speech processing thanks to their success on various downstream tasks. However, most fine-tuning approaches update all the parameters of the pre-trained model, which becomes prohibitive as the model size grows and can lead to overfitting on small datasets. In this paper, we conduct a comprehensive analysis of parameter-efficient transfer learning (PETL) methods that reduce the number of learnable parameters required for adapting to speaker verification tasks. Specifically, during fine-tuning the pre-trained model is frozen, and only lightweight modules inserted into each Transformer block are trainable (a method known as adapters). Moreover, to boost performance in a cross-language, low-resource scenario, the Transformer model is further tuned on a large intermediate dataset before being fine-tuned on the small target dataset. Updating fewer than 4% of the parameters, our proposed PETL-based methods achieve performance comparable to full fine-tuning (Vox1-O: 0.55%, Vox1-E: 0.82%, Vox1-H: 1.73%).
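As a rough illustration of the adapter idea described above (a minimal sketch, not the authors' exact implementation), the snippet below wraps a frozen Transformer block with a small trainable bottleneck module; the class names, bottleneck width, and insertion point after the block output are assumptions made for illustration.

```python
# Minimal sketch of bottleneck adapters on a frozen Transformer block.
# Hyperparameters and wiring are illustrative assumptions only.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck: down-project, non-linearity, up-project,
    plus a residual connection so frozen features pass through unchanged."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a pre-trained Transformer block: the block itself is frozen,
    and only the inserted adapter's parameters receive gradient updates."""

    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        self.adapter = Adapter(dim)  # the only trainable parameters here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))
```

During fine-tuning, only parameters with `requires_grad=True` (the adapters, plus any task head) would be passed to the optimizer, which is what keeps the trainable fraction small relative to the full model.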