This report describes the DKU-DukeECE team's submission to the self-supervised speaker verification task of the 2021 VoxCeleb Speaker Recognition Challenge (VoxSRC). Our method employs an iterative labeling framework to learn self-supervised speaker representations with a deep neural network (DNN). The framework starts by training a self-supervised speaker embedding network that maximizes agreement between different segments of the same utterance via a contrastive loss. Taking advantage of a DNN's ability to learn from data with label noise, we propose to cluster the speaker embeddings obtained from the previous speaker network and use the resulting cluster assignments as pseudo labels to train a new DNN. Moreover, we iteratively train the speaker network with pseudo labels generated in the previous step to bootstrap its discriminative power. Visual modality data is also incorporated into this self-labeling framework: the visual pseudo labels and the audio pseudo labels are fused with a cluster ensemble algorithm to generate a robust supervisory signal for representation learning. Our submission achieves an equal error rate (EER) of 5.58% and 5.59% on the challenge development and test sets, respectively.
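To make the iterative labeling loop concrete, the following is a minimal sketch of the clustering-then-retraining cycle described above. It is not the authors' implementation: the `embed_fn` and `train_fn` callables are hypothetical placeholders for the embedding extractor and the DNN training routine, and scikit-learn's k-means is used only as a generic stand-in for the clustering step.

```python
# Illustrative sketch of an iterative pseudo-labeling loop (assumptions noted above).
from sklearn.cluster import KMeans


def iterative_pseudo_labeling(utterances, initial_model, embed_fn, train_fn,
                              n_clusters, n_iterations):
    """Alternate between clustering speaker embeddings and retraining on pseudo labels.

    embed_fn(model, utterances) -> (N, D) array of speaker embeddings  [hypothetical helper]
    train_fn(utterances, pseudo_labels) -> a newly trained speaker network  [hypothetical helper]
    """
    # Start from a network pre-trained with a contrastive self-supervised objective.
    model = initial_model
    for _ in range(n_iterations):
        # 1. Extract speaker embeddings with the current network.
        embeddings = embed_fn(model, utterances)

        # 2. Cluster the embeddings; the cluster assignments serve as pseudo labels.
        pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)

        # 3. Train a new DNN with the pseudo labels as supervision and repeat.
        model = train_fn(utterances, pseudo_labels)
    return model
```

In the multi-modal variant summarized in the abstract, the pseudo labels produced from the audio and visual embeddings would be fused by a cluster ensemble algorithm before the retraining step, rather than coming from a single clustering pass as in this sketch.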