The additive margin softmax (AM-Softmax) loss has delivered remarkable performance in speaker verification. A presumed behavior of AM-Softmax is that it shrinks within-class variation by emphasizing the target logits, which in turn improves the margin between the target and non-target classes. In this paper, we conduct a careful analysis of the behavior of the AM-Softmax loss and show that it does not implement true max-margin training. Based on this observation, we present a Real AM-Softmax loss that incorporates a true margin function into the softmax training. Experiments on VoxCeleb1, SITW, and CNCeleb demonstrate that the corrected AM-Softmax loss consistently outperforms the original one. The code has been released at https://gitlab.com/csltstu/sunine.
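For reference, the standard AM-Softmax formulation the abstract refers to subtracts a fixed margin from the target-class cosine before scaling and applying cross-entropy. A minimal single-example sketch in plain Python follows; the scale `s=30.0` and margin `m=0.2` are illustrative defaults, not values from this paper.

```python
import math

def am_softmax_loss(cosines, label, s=30.0, m=0.2):
    """AM-Softmax loss for a single example (illustrative sketch).

    cosines: cosine similarities between the embedding and each class
             weight vector; label: index of the target class.
    The margin m is subtracted from the target cosine before scaling,
    so the target class must beat the others by at least m.
    """
    logits = [s * c for c in cosines]
    logits[label] = s * (cosines[label] - m)  # margin on the target logit only
    mx = max(logits)  # shift for numerical stability
    log_sum = mx + math.log(sum(math.exp(z - mx) for z in logits))
    return log_sum - logits[label]  # negative log-probability of the target
```

Because the margin is applied only to the target logit, the loss stays positive even for examples the plain softmax already classifies confidently, which is the emphasis-on-target behavior the paper analyzes.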