Citrinet is an end-to-end convolutional Connectionist Temporal Classification (CTC) based automatic speech recognition (ASR) model. To capture both local and global contextual information, Citrinet combines 1D time-channel separable convolutions with sub-word encoding and squeeze-and-excitation (SE), making the whole architecture deep: 23 blocks comprising 235 convolution layers and 46 linear layers. This purely convolutional and deep architecture makes Citrinet relatively slow to converge. In this paper, we propose to introduce multi-head attention together with feed-forward networks into the convolution module of each Citrinet block, while keeping the SE module and the residual module unchanged. To speed up training, we remove 8 convolution layers from each attention-enhanced Citrinet block and reduce the number of blocks from 23 to 13. Experiments on the Japanese CSJ-500h and Magic-1600h datasets show that the attention-enhanced Citrinet, with fewer layers and blocks, converges faster and achieves lower character error rates than (1) Citrinet with 80\% of the training time and (2) Conformer with 40\% of the training time and 29.8\% of the model size.
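To make the proposed block structure concrete, the following is a minimal PyTorch sketch of one attention-enhanced block as the abstract describes it: multi-head self-attention plus a feed-forward network added alongside a reduced 1D time-channel separable convolution stack, with the SE module and the residual connection kept unchanged. All class names, layer counts, and hyperparameters here (e.g. \texttt{num\_heads}, kernel size, FFN width) are illustrative assumptions, not the paper's exact configuration.

\begin{verbatim}
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel-wise squeeze-and-excitation, kept as in the original block."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        scale = self.fc(x.mean(dim=-1))     # squeeze: average over time
        return x * scale.unsqueeze(-1)      # excite: rescale each channel

class AttentionEnhancedBlock(nn.Module):
    """Hypothetical attention-enhanced Citrinet block: self-attention and an
    FFN augment a shortened depthwise-separable conv stack; SE and the
    residual path are unchanged."""
    def __init__(self, channels: int, num_heads: int = 4, ffn_mult: int = 4):
        super().__init__()
        # Reduced 1D time-channel separable convolution (depthwise + pointwise)
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=11, padding=5,
                      groups=channels),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn_norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads,
                                          batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(channels),
            nn.Linear(channels, ffn_mult * channels),
            nn.ReLU(inplace=True),
            nn.Linear(ffn_mult * channels, channels),
        )
        self.se = SqueezeExcite(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        residual = x
        y = self.conv(x)
        y = y.transpose(1, 2)               # (B, T, C) for attention / FFN
        a = self.attn_norm(y)
        y = y + self.attn(a, a, a, need_weights=False)[0]
        y = y + self.ffn(y)
        y = y.transpose(1, 2)               # back to (B, C, T)
        y = self.se(y)
        return torch.relu(y + residual)     # residual module unchanged
\end{verbatim}

Stacking 13 such blocks (versus 23 in the original Citrinet) would realize the depth reduction described above; the attention and FFN sub-layers supply the global context that the removed convolution layers previously had to approximate.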