Conventional automatic speaker verification systems can usually be decomposed into a front-end model, such as a time delay neural network (TDNN), for extracting speaker embeddings, and a back-end model, such as statistics-based probabilistic linear discriminant analysis (PLDA) or its neural network-based counterpart (NPLDA), for similarity scoring. However, optimizing the front-end and back-end models sequentially may leave the system in a local minimum, preventing the system as a whole from reaching its optimum. Although some methods have been proposed for jointly optimizing the two models, such as the generalized end-to-end (GE2E) model and the NPLDA E2E model, all of these methods are designed for a single enrollment utterance. In this paper, we propose a new E2E joint method for speaker verification designed specifically for the practical case of multiple enrollment utterances. To leverage the intra-relationships among multiple enrollment utterances, our model is equipped with frame-level and utterance-level attention mechanisms. We also employ several data augmentation techniques, including conventional noise augmentation with the MUSAN and RIRs datasets and a novel speaker embedding-level mixup strategy, for better optimization.
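Two of the components above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact formulation: an utterance-level attentive pooling step that aggregates multiple enrollment embeddings into one, and an embedding-level mixup step in the style of Zhang et al.'s mixup (a Beta-sampled convex combination of two embeddings). All function names and the fixed attention query are hypothetical.

```python
import numpy as np

def attentive_pool(embs, w):
    """Aggregate multiple enrollment embeddings with attention weights.

    embs: (n_utts, dim) array of per-utterance speaker embeddings.
    w:    (dim,) attention query; learnable in a real model, fixed here
          purely for illustration.
    """
    scores = embs @ w                       # (n_utts,) relevance scores
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()
    return weights @ embs                   # (dim,) pooled enrollment embedding

def mixup_embeddings(emb_a, emb_b, alpha=0.2, rng=None):
    """Mix two speaker embeddings with a Beta-sampled weight (mixup).

    Hypothetical sketch: lam ~ Beta(alpha, alpha); the mixed embedding is
    lam * emb_a + (1 - lam) * emb_b, and the same lam would weight the two
    speakers' labels in the training loss.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * emb_a + (1.0 - lam) * emb_b, lam

# Example: pool three enrollment embeddings, then mix with another speaker's.
enroll = np.random.default_rng(0).normal(size=(3, 192))
pooled = attentive_pool(enroll, np.ones(192))
mixed, lam = mixup_embeddings(pooled, np.zeros(192))
```

In a full model the attention query `w` (and typically a small MLP producing the scores) would be trained jointly with the front-end, which is what distinguishes the E2E setting from a fixed back-end scorer.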