Speech representations learned from large-scale unlabeled data have shown better generalizability than those from supervised learning, and have therefore attracted considerable interest for various downstream tasks. In this paper, we explore the limits of speech representations learned with different self-supervised objectives and datasets for automatic speaker verification (ASV), using a well-recognized state-of-the-art ASV model, ECAPA-TDNN [1], as the downstream model. The representations from all hidden layers of the pre-trained model are first averaged with learnable weights and then fed into ECAPA-TDNN as input features. Experimental results on the VoxCeleb dataset show that the weighted-average representation is significantly superior to FBank, a conventional handcrafted feature for ASV. Our best single system achieves equal error rates (EERs) of 0.564%, 0.561%, and 1.230% on the three official trials of VoxCeleb1, respectively. Furthermore, an ensemble of three pre-trained models further reduces the EERs to 0.431%, 0.507%, and 1.081%. Among the three evaluation trials, our best system outperforms the winning system [2] of the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC2021) on the VoxCeleb1-E trial.
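The learnable weighted average over hidden layers can be illustrated with a minimal PyTorch sketch. The module name `WeightedLayerSum`, the layer count, and the tensor shapes below are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class WeightedLayerSum(nn.Module):
    """Learnable weighted average over the hidden-layer outputs of a
    pre-trained speech model (a sketch under assumed shapes)."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One scalar weight per hidden layer; zero init gives a
        # uniform average after the softmax below.
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_layers, batch, time, dim)
        norm_weights = torch.softmax(self.weights, dim=0)
        # Weighted sum over the layer axis -> (batch, time, dim),
        # fed to the downstream ASV model in place of FBank features.
        return (norm_weights.view(-1, 1, 1, 1) * hidden_states).sum(dim=0)

# Example: 13 layers (assumed), batch 2, 100 frames, 768-dim features.
layers = torch.randn(13, 2, 100, 768)
pooled = WeightedLayerSum(num_layers=13)(layers)
print(pooled.shape)  # torch.Size([2, 100, 768])
```

Because the layer weights are trained jointly with the downstream ECAPA-TDNN, the system can learn which pre-trained layers carry the most speaker-discriminative information.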