Automatic speech recognition (ASR) systems developed in recent years have shown promising results with self-attention models (e.g., Transformer and Conformer), which are replacing conventional recurrent neural networks. Meanwhile, the structured state space model (S4) has recently been proposed, producing promising results on various long-sequence modeling tasks, including raw speech classification. Like the Transformer, the S4 model can be trained in parallel. In this study, we apply S4 as a decoder for ASR and text-to-speech (TTS) tasks and compare it with the Transformer decoder. For the ASR task, our experimental results demonstrate that the proposed model achieves a competitive word error rate (WER) of 1.88%/4.25% on the LibriSpeech test-clean/test-other sets and a character error rate (CER) of 3.80%/2.63%/2.98% on the CSJ eval1/eval2/eval3 sets. Furthermore, the proposed model is more robust than the standard Transformer model, particularly for long-form speech, on both datasets. For the TTS task, the proposed method outperforms the Transformer baseline.