We introduce DIVE, an end-to-end speaker diarization algorithm. Our neural algorithm casts the diarization task as an iterative process: it repeatedly builds a representation for each speaker and then predicts each speaker's voice activity conditioned on the extracted representations. This strategy intrinsically resolves the speaker ordering ambiguity without requiring the classical permutation invariant training loss. In contrast with prior work, our model does not rely on pretrained speaker representations and optimizes all parameters of the system with a multi-speaker voice activity loss. Importantly, our loss explicitly excludes unreliable speaker turn boundaries from training, matching the standard collar-based Diarization Error Rate (DER) evaluation. Overall, these contributions yield a system that redefines the state of the art on the standard CALLHOME benchmark, with 6.7% DER compared to 7.8% for the best alternative.