Wav2vec 2.0 is an end-to-end, self-supervised framework for learning speech representations that has proven successful in automatic speech recognition (ASR), but most work on the topic has been developed for a single language: English. It is therefore unclear whether the self-supervised framework is effective for languages with different writing systems, such as Korean, which is written in Hangul, a script with a unique compositional structure. In this paper, we present K-Wav2Vec 2.0, a modified version of Wav2vec 2.0 designed for Korean automatic speech recognition by exploring and optimizing various factors of the original Wav2vec 2.0. For fine-tuning, we propose a multi-task hierarchical architecture that reflects the structure of Korean writing. Moreover, a joint decoder is applied to alleviate the problem of words falling outside the vocabulary. For pre-training, considering the limited resources available for Korean, we attempt cross-lingual transfer by further pre-training the English Wav2vec 2.0 model on a Korean dataset. Our experimental results demonstrate that the proposed method yields the best performance on both Korean ASR datasets: Ksponspeech (a large-scale Korean speech corpus) and Clovacall (a call-based dialog corpus). Further pre-training is also effective for language adaptation, leading to large improvements without additional data.
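The multi-task hierarchical architecture mentioned above rests on a property of Hangul: every precomposed syllable block decomposes deterministically into jamo (an initial consonant, a medial vowel, and an optional final consonant), so a single transcript yields two aligned label granularities. As a minimal sketch of that decomposition (an illustrative helper using the standard Unicode arithmetic, not code from the K-Wav2Vec 2.0 implementation):

```python
# Decompose Hangul syllable blocks into jamo via Unicode arithmetic.
# Each precomposed syllable U+AC00..U+D7A3 encodes, relative to U+AC00,
# the index (initial * 21 + medial) * 28 + final.
# Illustrative helper; names are our own, not the paper's codebase.

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 initial consonants
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")      # 21 medial vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + "no final"

def syllables_to_jamo(text: str) -> str:
    """Map each Hangul syllable to its jamo sequence; pass other characters through."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                       # precomposed syllable range
            cho, rest = divmod(code, 21 * 28)
            jung, jong = divmod(rest, 28)
            out.append(CHOSEONG[cho] + JUNGSEONG[jung] + JONGSEONG[jong])
        else:
            out.append(ch)                          # spaces, punctuation, Latin, etc.
    return "".join(out)

print(syllables_to_jamo("한국어"))  # → ㅎㅏㄴㄱㅜㄱㅇㅓ
```

In a hierarchical multi-task setup of the kind the abstract describes, the jamo-level sequence produced by such a decomposition could supervise one output head while the original syllable-level transcript supervises another, both derived from the same reference text.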