Background noise is a well-known factor that degrades the accuracy and reliability of speaker verification (SV) systems by reducing speech intelligibility. Various studies have used a separately pretrained enhancement model as the front-end module of an SV system in noisy environments, and these methods remove noise effectively. However, the denoising process of an independent enhancement model not tailored to the SV task can also distort the speaker information contained in utterances. We argue that the enhancement network and the speaker embedding extractor should be fully jointly trained for SV under noisy conditions to alleviate this issue. Therefore, we propose a U-Net-based integrated framework that simultaneously optimizes speaker identification and feature enhancement losses. Moreover, we analyze the structural limitations of applying U-Net directly to noisy SV tasks and further propose an Extended U-Net to mitigate these drawbacks. We evaluate the models on the noise-synthesized VoxCeleb1 test set and the VOiCES development set, which was recorded in various noisy scenarios. The experimental results demonstrate that the U-Net-based fully joint training framework is more effective than the baseline, and the Extended U-Net exhibits state-of-the-art performance compared with recently proposed compensation systems.
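As a rough illustration of the joint-training idea described above, the sketch below combines a speaker identification loss with a feature enhancement loss in a single objective. It is not the authors' implementation: the enhancer, embedding extractor, loss weight `alpha`, and the use of plain cross-entropy and MSE are illustrative assumptions.

```python
# Minimal sketch of fully joint training: one backward pass optimizes both
# the speaker identification loss and the feature enhancement loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSVModel(nn.Module):
    def __init__(self, enhancer: nn.Module, extractor: nn.Module,
                 emb_dim: int, num_speakers: int):
        super().__init__()
        self.enhancer = enhancer      # e.g. a U-Net operating on noisy features
        self.extractor = extractor    # speaker embedding network
        self.classifier = nn.Linear(emb_dim, num_speakers)

    def forward(self, noisy_feats: torch.Tensor):
        enhanced = self.enhancer(noisy_feats)   # denoised features
        emb = self.extractor(enhanced)          # speaker embedding
        logits = self.classifier(emb)           # speaker identification logits
        return enhanced, logits

def joint_loss(enhanced, clean_feats, logits, speaker_ids, alpha=0.5):
    # Speaker identification loss (cross-entropy stands in for the
    # margin-based classification losses commonly used in SV training).
    l_spk = F.cross_entropy(logits, speaker_ids)
    # Feature enhancement loss: distance between enhanced and clean features.
    l_enh = F.mse_loss(enhanced, clean_feats)
    # `alpha` is a hypothetical weighting term balancing the two objectives.
    return l_spk + alpha * l_enh
```

Because both losses share the enhancer's parameters, the gradient from the speaker loss discourages denoising behavior that removes speaker-discriminative information, which is the motivation for joint training rather than a separately pretrained front-end.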