Despite its real-time decoding capability, Monotonic Multihead Attention (MMA) shows performance comparable to state-of-the-art offline methods in machine translation and automatic speech recognition (ASR) tasks. However, latency remains a major issue for MMA in ASR, and it must be combined with an inference-time technique that reduces latency, such as head-synchronous beam search decoding, which forces all non-activated heads to activate after a small fixed delay from the first head activation. In this paper, we remove the discrepancy between the training and test phases by considering, during the training of MMA, the interactions across multiple heads that will occur at test time. Specifically, we derive the expected alignments from monotonic attention by considering the boundaries of the other heads and reflect them in the learning process. We validate the proposed method on two standard benchmark datasets for ASR and show that our approach, MMA with mutually constrained heads from the training stage, outperforms the baselines.
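The head-synchronous constraint described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's code: it assumes each attention head proposes a boundary (the source frame index at which it activates) for the current output step, and the function name `force_sync` and the `max_delay` parameter are illustrative.

```python
# Hypothetical sketch of head-synchronous activation forcing: once the
# earliest head activates, every lagging head is forced to activate
# within a small fixed delay of that first activation.

def force_sync(boundaries, max_delay):
    """Clamp each head's proposed activation boundary so that no head
    activates more than `max_delay` frames after the earliest head."""
    first = min(boundaries)  # frame index of the first head to activate
    return [min(b, first + max_delay) for b in boundaries]

# Example: three heads propose boundaries at frames 3, 9, and 15.
# With max_delay=4, the laggards are pulled in to frame 3 + 4 = 7.
print(force_sync([3, 9, 15], max_delay=4))  # [3, 7, 7]
```

At training time no such clamping occurs in standard MMA, which is exactly the train/test discrepancy the paper addresses by modeling cross-head boundary interactions in the expected-alignment computation.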