Recent work has explored using self-supervised learning (SSL) speech representations such as wav2vec2.0 as the representation medium in standard two-stage TTS, in place of conventionally used mel-spectrograms. It is, however, unclear which speech SSL is the better fit for TTS, and whether performance differs between read and spontaneous TTS, the latter of which is arguably more challenging. This study addresses these questions by testing several speech SSLs, including different layers of the same SSL, in two-stage TTS on both read and spontaneous corpora, while keeping the TTS model architecture and training settings constant. Listening-test results show that the 9th layer of 12-layer wav2vec2.0 (ASR-finetuned) outperforms the other tested SSLs and mel-spectrograms in both read and spontaneous TTS. Our work sheds light both on how speech SSL can readily improve current TTS systems and on how SSLs compare in the challenging generative task of TTS. Audio examples can be found at https://www.speech.kth.se/tts-demos/ssr_tts