In this work we propose a novel token-based training strategy that improves Transformer-Transducer (T-T) based speaker change detection (SCD) performance. The conventional T-T based SCD model loss optimizes all output tokens equally; because speaker changes are sparse in the training data, this leads to sub-optimal detection accuracy. To mitigate this issue, we use a customized edit-distance algorithm to estimate the token-level SCD false accept (FA) and false reject (FR) rates during training, and optimize the model parameters to minimize a weighted combination of the FA and FR rates, focusing the model on accurately predicting speaker changes. We also propose a set of evaluation metrics that align better with commercial use cases. Experiments on a group of challenging real-world datasets show that the proposed training method significantly improves the overall performance of the SCD model with the same number of parameters.
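The idea of estimating token-level FA and FR from an edit-distance alignment can be illustrated with a minimal sketch. This is not the paper's exact formulation: the `<sc>` speaker-change token, the normalization by reference change count, and the weights `alpha`/`beta` are all illustrative assumptions.

```python
SC = "<sc>"  # assumed speaker-change token (illustrative)

def align(ref, hyp):
    """Levenshtein alignment as (ref_tok, hyp_tok) pairs; None marks a gap."""
    n, m = len(ref), len(hyp)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            cost[i][j] = min(sub, cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    pairs, i, j = [], n, m  # backtrace the cheapest path
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            pairs.append((ref[i - 1], None)); i -= 1   # deletion
        else:
            pairs.append((None, hyp[j - 1])); j -= 1   # insertion
    return pairs[::-1]

def sc_fa_fr(ref, hyp):
    """Count false accepts (hyp emits <sc> where ref has none) and
    false rejects (a reference <sc> is missing from hyp)."""
    fa = fr = 0
    for r, h in align(ref, hyp):
        if h == SC and r != SC:
            fa += 1
        if r == SC and h != SC:
            fr += 1
    return fa, fr

def weighted_sc_loss(ref, hyp, alpha=0.5, beta=0.5):
    """Weighted combination of FA and FR rates, normalized by the number
    of reference speaker-change tokens (an assumption for illustration)."""
    fa, fr = sc_fa_fr(ref, hyp)
    n_sc = max(1, sum(t == SC for t in ref))
    return alpha * fa / n_sc + beta * fr / n_sc
```

In the paper the FA/FR estimates are differentiable quantities used inside the training loss; this sketch only shows the counting logic on discrete token sequences, e.g. `sc_fa_fr(["a", SC, "b"], ["a", "b"])` reports one false reject and no false accepts.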