This work studies the use of attention masking in transformer-transducer-based speech recognition for building a single configurable model for different deployment scenarios. We present a comprehensive set of experiments comparing fixed masking, where the same attention mask is applied at every frame, with chunked masking, where the attention mask for each frame is determined by chunk boundaries, in terms of recognition accuracy and latency. We then explore the use of variable masking, where the attention masks are sampled from a target distribution at training time, to build models that can work in different configurations. Finally, we investigate how a single configurable model can be used to perform both first-pass streaming recognition and second-pass acoustic rescoring. Experiments show that chunked masking achieves a better accuracy vs. latency trade-off compared to fixed masking, both with and without FastEmit. We also show that variable masking improves accuracy by up to 8% relative in the acoustic rescoring scenario.
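To make the chunked-masking idea concrete, below is a minimal illustrative sketch (not from the paper) of how chunk boundaries can define a boolean self-attention mask, assuming a PyTorch-style convention where True marks a key position a query frame may attend to; the function name and the `chunk_size` / `num_left_chunks` parameters are hypothetical.

```python
import torch

def chunked_attention_mask(num_frames: int, chunk_size: int,
                           num_left_chunks: int = -1) -> torch.Tensor:
    """Sketch of a chunked attention mask.

    Each frame may attend to every frame in its own chunk (including
    future frames up to the chunk boundary) plus frames in up to
    `num_left_chunks` preceding chunks (all preceding chunks if -1).
    Returns an (num_frames, num_frames) boolean tensor where True
    marks an allowed query->key position.
    """
    chunk_ids = torch.arange(num_frames) // chunk_size  # chunk index of each frame
    q = chunk_ids.unsqueeze(1)                          # query frame's chunk, shape (N, 1)
    k = chunk_ids.unsqueeze(0)                          # key frame's chunk, shape (1, N)
    if num_left_chunks < 0:
        mask = k <= q                                   # unlimited left context
    else:
        mask = (k <= q) & (k >= q - num_left_chunks)    # limited left context
    return mask

# Example: 8 frames, chunks of 4. Frames 0-3 attend within chunk 0;
# frames 4-7 attend to chunks 0 and 1.
print(chunked_attention_mask(8, 4).int())
```

Under this reading, fixed masking corresponds to one static mask applied at every frame, and variable masking would sample `chunk_size` (and the left-context size) from a target distribution at training time, rebuilding the mask per batch so one model learns to operate under the different configurations.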