The recently proposed Conformer architecture has shown state-of-the-art performance in Automatic Speech Recognition by combining convolution with attention to model both local and global dependencies. In this paper, we study how to reduce the complexity of the Conformer architecture under a limited computing budget, leading to a more efficient architecture design that we call Efficient Conformer. We introduce progressive downsampling to the Conformer encoder and propose a novel attention mechanism named grouped attention, allowing us to reduce attention complexity from $O(n^{2}d)$ to $O(n^{2}d / g)$ for sequence length $n$, hidden dimension $d$ and group size parameter $g$. We also experiment with the use of strided multi-head self-attention as a global downsampling operation. Our experiments are performed on the LibriSpeech dataset with CTC and RNN-Transducer losses. We show that within the same computing budget, the proposed architecture achieves better performance with faster training and decoding than the Conformer. Our 13M-parameter CTC model achieves competitive WERs of 3.6%/9.0% without using a language model and 2.7%/6.7% with an external n-gram language model on the test-clean/test-other sets, while being 29% faster than our CTC Conformer baseline at inference and 36% faster to train.
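For intuition on the stated complexity reduction, the sketch below shows one way grouped attention can be realized: $g$ adjacent frames are concatenated along the feature axis before standard scaled dot-product attention, shrinking the attention matrix from $n \times n$ to $(n/g) \times (n/g)$ and hence the cost from $O(n^{2}d)$ to $O(n^{2}d/g)$. This is a minimal, single-head illustration under that assumption; the function name `grouped_attention` and the plain formulation are illustrative, not the paper's exact multi-head, relative-positional implementation.

```python
import torch
import torch.nn.functional as F

def grouped_attention(q, k, v, g):
    """Illustrative sketch of grouped attention.

    q, k, v: tensors of shape (batch, n, d), with n divisible by g.
    Grouping g adjacent frames reduces the attended sequence length
    from n to n/g while expanding the feature size from d to g*d,
    so the dot-product cost drops from O(n^2 d) to O(n^2 d / g).
    """
    b, n, d = q.shape
    # Group g adjacent time steps: (b, n, d) -> (b, n // g, g * d)
    qg = q.reshape(b, n // g, g * d)
    kg = k.reshape(b, n // g, g * d)
    vg = v.reshape(b, n // g, g * d)
    # Standard scaled dot-product attention on the shorter sequence
    scores = qg @ kg.transpose(-2, -1) / (g * d) ** 0.5
    out = F.softmax(scores, dim=-1) @ vg
    # Ungroup back to the original time resolution
    return out.reshape(b, n, d)

# Example: n = 8 frames, d = 4 features, group size g = 2
x = torch.randn(1, 8, 4)
y = grouped_attention(x, x, x, g=2)
print(y.shape)  # torch.Size([1, 8, 4])
```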