Recent works show that speech separation guided diarization (SSGD) is an increasingly promising direction, mainly thanks to recent progress in speech separation. SSGD performs diarization by first separating the speakers and then applying voice activity detection (VAD) to each separated stream. In this work we conduct an in-depth study of SSGD in the conversational telephone speech (CTS) domain, focusing mainly on low-latency streaming diarization applications. We consider three state-of-the-art speech separation (SSep) algorithms and study their performance in both online and offline scenarios, considering non-causal and causal implementations as well as continuous SSep (CSS) windowed inference. We compare different SSGD algorithms on two widely used CTS datasets, CALLHOME and the Fisher Corpus (Parts 1 and 2), and evaluate both separation and diarization performance. To improve performance, we propose a novel, causal, and computationally efficient leakage removal algorithm, which significantly reduces false alarms. We also explore, for the first time, fully end-to-end SSGD integration between the SSep and VAD modules. Crucially, this enables fine-tuning on real-world data for which oracle speaker sources are not available. In particular, our best model achieves 8.8% DER on CALLHOME, outperforming the current state-of-the-art end-to-end neural diarization model despite being trained on an order of magnitude less data and having significantly lower latency (0.1 vs. 1 second). Finally, we show that the separated signals can also be readily used for automatic speech recognition, reaching performance close to that obtained with oracle sources in some configurations.
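The two-stage SSGD pipeline described above (separate the speakers, then run VAD on each separated stream) can be illustrated with a minimal sketch. This is not the paper's implementation: the energy-based VAD, the frame length, and the `toy_separator` placeholder are all illustrative assumptions standing in for the neural SSep and VAD modules.

```python
import numpy as np

def energy_vad(stream, frame_len=160, threshold=1e-3):
    """Toy frame-level energy VAD: returns a boolean activity mask per frame.
    (A stand-in for the neural VAD module; threshold is an assumption.)"""
    n_frames = len(stream) // frame_len
    frames = stream[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    return energy > threshold

def ssgd(mixture, separator, frame_len=160):
    """SSGD pipeline: separate speakers first, then apply VAD to each stream.
    Returns one per-frame activity mask per separated speaker."""
    streams = separator(mixture)  # list of per-speaker waveforms
    return [energy_vad(s, frame_len) for s in streams]

def toy_separator(mixture):
    """Placeholder separator (NOT a real SSep model): pretends speaker 1
    occupies the first half of the mixture and speaker 2 the second half."""
    half = len(mixture) // 2
    s1 = np.concatenate([mixture[:half], np.zeros(half)])
    s2 = np.concatenate([np.zeros(half), mixture[half:]])
    return [s1, s2]

# Usage: a 3200-sample toy "conversation" (20 frames of 160 samples)
mixture = 0.5 * np.sin(2 * np.pi * np.arange(3200) / 16)
masks = ssgd(mixture, toy_separator)
```

In the paper's actual setting, `separator` would be a (causal or non-causal) neural SSep model, the VAD would be a learned module, and a leakage removal step would suppress cross-talk residue in each stream before VAD to reduce false alarms.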