Recently, speech representation learning has improved many speech-related tasks such as speech recognition, speech classification, and speech-to-text translation. However, all of these tasks lie in the direction of speech understanding; in the inverse direction, speech synthesis, the potential of representation learning is yet to be realized, due to the challenging nature of generating high-quality speech. To address this problem, we propose our framework, Alignment-Aware Acoustic-Text Pretraining (A$^3$T), which reconstructs masked acoustic signals with text input and acoustic-text alignment during training. In this way, the pretrained model can generate high-quality reconstructed spectrograms, which can be applied directly to speech editing and unseen-speaker TTS. Experiments show that A$^3$T outperforms SOTA models on speech editing, and improves multi-speaker speech synthesis without an external speaker verification model.
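To make the pretraining objective concrete, below is a minimal PyTorch sketch of masked spectrogram reconstruction conditioned on text and an alignment-aware positional encoding, as the abstract describes. The module sizes, tensor layout, and names (`MaskedAcousticTextModel`, `align`, `frame_mask`) are illustrative assumptions, not the authors' actual A$^3$T implementation.

```python
# Hypothetical sketch of the A^3T-style objective: mask spans of mel-spectrogram
# frames, condition on phoneme text plus the acoustic-text alignment, and train
# the encoder to reconstruct the masked frames. Sizes/names are assumptions.
import torch
import torch.nn as nn


class MaskedAcousticTextModel(nn.Module):
    def __init__(self, n_phones=100, n_mels=80, d_model=256):
        super().__init__()
        self.phone_emb = nn.Embedding(n_phones, d_model)
        self.mel_proj = nn.Linear(n_mels, d_model)
        self.mask_emb = nn.Parameter(torch.zeros(d_model))  # learned [MASK] vector
        self.pos_emb = nn.Embedding(1024, d_model)          # positions shared via alignment
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.out = nn.Linear(d_model, n_mels)

    def forward(self, phones, mels, align, frame_mask):
        # phones: (B, Tp) phoneme ids; mels: (B, Tf, n_mels)
        # align: (B, Tf) phoneme index of each frame (the acoustic-text alignment)
        # frame_mask: (B, Tf) bool, True where spectrogram frames are masked out
        pos_p = torch.arange(phones.size(1), device=phones.device)
        x_p = self.phone_emb(phones) + self.pos_emb(pos_p)
        x_a = self.mel_proj(mels)
        x_a = torch.where(frame_mask.unsqueeze(-1), self.mask_emb.expand_as(x_a), x_a)
        # alignment-aware positions: each frame reuses the position of its phoneme
        x_a = x_a + self.pos_emb(align)
        h = self.encoder(torch.cat([x_p, x_a], dim=1))
        pred = self.out(h[:, phones.size(1):])  # predictions for the acoustic frames
        # L1 reconstruction loss computed only on the masked frames
        return (pred - mels).abs()[frame_mask].mean()


# Toy usage with random data, just to show the expected shapes.
B, Tp, Tf = 2, 12, 60
phones = torch.randint(0, 100, (B, Tp))
mels = torch.randn(B, Tf, 80)
align = torch.randint(0, Tp, (B, Tf))
frame_mask = torch.rand(B, Tf) < 0.15
loss = MaskedAcousticTextModel()(phones, mels, align, frame_mask)
loss.backward()
```

Because the model fills in masked frames given the text and alignment, the same forward pass sketches why the pretrained model transfers to speech editing (mask the region to be edited) and to unseen-speaker TTS (the unmasked frames supply speaker identity).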