Recently, end-to-end neural diarization (EEND) has been introduced and achieves promising results in speaker-overlapped scenarios. In EEND, speaker diarization is formulated as a multi-label prediction problem, where speaker activities are estimated independently and their dependencies are not well considered. To overcome these disadvantages, we employ power set encoding to reformulate speaker diarization as a single-label classification problem and propose the overlap-aware EEND (EEND-OLA) model, in which speaker overlaps and dependencies can be modeled explicitly. Inspired by the success of two-stage hybrid systems, we further propose a novel Two-stage OverLap-aware Diarization framework (TOLD), which involves a speaker overlap-aware post-processing (SOAP) model to iteratively refine the diarization results of EEND-OLA. Experimental results show that, compared with the original EEND, the proposed EEND-OLA achieves a 14.39% relative improvement in terms of diarization error rate (DER), and utilizing SOAP provides a further 19.33% relative improvement. As a result, our TOLD achieves a DER of 10.14% on the CALLHOME dataset, which, to the best of our knowledge, is a new state-of-the-art result on this benchmark.
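To illustrate the core idea of power set encoding, the following is a minimal sketch (not the authors' implementation) of how a per-frame multi-label speaker-activity vector can be mapped to a single power-set class index, so that overlapping-speaker combinations become mutually exclusive classes; the function names and the `max_overlap` parameter are illustrative assumptions.

```python
from itertools import combinations

def powerset_classes(num_speakers, max_overlap):
    """Enumerate all speaker subsets up to max_overlap simultaneous
    speakers; this is the single-label class space replacing the
    multi-label formulation (illustrative, not the paper's code)."""
    classes = []
    for k in range(max_overlap + 1):
        classes.extend(combinations(range(num_speakers), k))
    return classes

def encode(active_speakers, classes):
    """Map the set of speakers active in one frame to one class index."""
    return classes.index(tuple(sorted(active_speakers)))

classes = powerset_classes(num_speakers=3, max_overlap=2)
# classes: [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
print(encode(set(), classes))    # silence -> class 0
print(encode({0, 2}, classes))   # overlap of speakers 0 and 2 -> class 5
```

Because each frame now receives exactly one class label, overlap patterns and inter-speaker dependencies are represented explicitly in the output space rather than assumed independent.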