Recent studies have shown that pseudo labels can contribute to unsupervised domain adaptation (UDA) for speaker verification. Inspired by self-training strategies that use an existing classifier to label unlabeled data for retraining, we propose a cluster-guided UDA framework that labels the target-domain data by clustering and combines the labeled source-domain data with the pseudo-labeled target-domain data to train a speaker embedding network. To improve cluster quality, we train a speaker embedding network dedicated to clustering by minimizing the contrastive center loss. The goal is to reduce the distance between an embedding and its assigned cluster center while enlarging the distances between the embedding and the other cluster centers. Using VoxCeleb2 as the source domain and CN-Celeb1 as the target domain, we demonstrate that the proposed method achieves an equal error rate (EER) of 8.10% on the CN-Celeb1 evaluation set without using any labels from the target domain. This result outperforms the supervised baseline by 39.6% and represents the state-of-the-art UDA performance on this corpus.
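The contrastive center loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the squared-Euclidean distance, and the small constant `delta` for numerical stability are assumptions; it pulls each embedding toward its assigned cluster center while pushing it away from all other centers by minimizing the ratio of the two distances.

```python
import numpy as np

def contrastive_center_loss(embeddings, labels, centers, delta=1e-6):
    """Hypothetical sketch of a contrastive center loss.

    For each embedding, the loss is the squared distance to its assigned
    cluster center divided by the summed squared distances to all other
    centers; minimizing it tightens clusters and separates centers.
    delta avoids division by zero and is an assumed detail.
    """
    loss = 0.0
    for x, y in zip(embeddings, labels):
        # Distance to the assigned cluster center (to be minimized).
        d_own = np.sum((x - centers[y]) ** 2)
        # Summed distances to all other centers (to be maximized).
        d_others = sum(np.sum((x - centers[k]) ** 2)
                       for k in range(len(centers)) if k != y)
        loss += d_own / (d_others + delta)
    return loss / len(embeddings)
```

An embedding lying exactly on its own center contributes zero loss, while one halfway between its own center and another contributes a loss near one, so gradient descent on this quantity moves embeddings toward their assigned centers relative to the rest.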