A key challenge in speaker representation learning with deep models is designing learning objectives that improve discrimination between unseen speakers in unseen domains. This work proposes a supervised contrastive learning objective that learns a speaker embedding space by effectively exploiting the label information in the training data. In this space, utterances spoken by the same or similar speakers stay close together, while utterances spoken by different speakers lie far apart. For each training speaker, we apply random data augmentation to their utterances to form positive pairs, while utterances from different speakers form negative pairs. To maximize speaker separability in the embedding space, we incorporate the additive angular-margin (AAM) loss into the contrastive learning objective. Experimental results on CN-Celeb show that this learning objective drives ECAPA-TDNN toward an embedding space with strong speaker discrimination. The objective is easy to implement, and we provide PyTorch code at https://github.com/shanmon110/AAMSupCon.
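The abstract itself does not spell out the loss, so the following is only a minimal PyTorch sketch of how an additive angular margin can be folded into a supervised contrastive (SupCon-style) objective: the margin is applied to positive pairs as cos(θ + m), and the per-anchor averaging over positives follows the standard SupCon formulation. The function name, the margin/temperature values, and the masking details are illustrative assumptions, not the authors' released code; see the linked repository for the official implementation.

```python
import torch
import torch.nn.functional as F

def aam_supcon_loss(embeddings, labels, margin=0.2, temperature=0.07):
    """Sketch of a supervised contrastive loss with an additive angular
    margin applied to positive pairs. Hyper-parameter values are
    illustrative, not taken from the paper."""
    # L2-normalise so dot products are cosine similarities.
    z = F.normalize(embeddings, dim=1)
    cos = z @ z.t()  # (B, B) pairwise cosine similarities

    # Positives share a speaker label; self-pairs (diagonal) are excluded.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos_mask = same & ~eye

    # Additive angular margin: replace cos(theta) with cos(theta + m)
    # on positive pairs, making them harder to satisfy.
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    cos_margin = torch.where(pos_mask, torch.cos(theta + margin), cos)

    # Softmax over all other samples in the batch (diagonal masked out).
    logits = (cos_margin / temperature).masked_fill(eye, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average the log-probability over each anchor's positives;
    # anchors without any positive in the batch are dropped.
    pos_counts = pos_mask.sum(1).clamp(min=1)
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    loss = -pos_log_prob.sum(1) / pos_counts
    return loss[pos_mask.any(1)].mean()

# Toy usage: 8 embeddings, 4 speakers with 2 augmented views each,
# mirroring the positive-pair construction described in the abstract.
emb = torch.randn(8, 192)
lab = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(aam_supcon_loss(emb, lab))
```

In a real training loop, the two views per speaker would come from random augmentations of that speaker's utterances rather than random tensors, and the embeddings would be produced by the speaker encoder (e.g. ECAPA-TDNN).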