One-shot voice cloning aims to transfer a target speaker's voice and speaking style to speech synthesized by a text-to-speech (TTS) system, where only a single short recording of the target speaker is available. Out-of-domain transfer remains a challenging task, and one important factor affecting the naturalness and similarity of synthetic speech is the conditional representation carrying speaker or style cues extracted from the limited reference. In this paper, we present a novel one-shot voice cloning algorithm called Unet-TTS that generalizes well to unseen speakers and styles. Based on a skip-connected U-Net structure, the new model can efficiently discover speaker-level and utterance-level spectral feature details in the reference audio, enabling accurate inference of complex acoustic characteristics as well as imitation of the speaking style in the synthetic speech. According to both subjective and objective evaluations of similarity, the new model outperforms both speaker-embedding and unsupervised style modeling (GST) approaches on an unseen emotional corpus.
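For readers unfamiliar with the skip-connected U-Net topology the abstract refers to, the sketch below shows a minimal PyTorch encoder-decoder over mel-spectrograms in which each decoder stage concatenates the matching encoder activation. The `UNet1D` name, channel sizes, and depth are illustrative assumptions for exposition only, not the paper's actual Unet-TTS architecture.

```python
import torch
import torch.nn as nn

class UNet1D(nn.Module):
    """Skip-connected U-Net over mel-spectrogram frames.

    Illustrative only: layer widths and depth are assumptions,
    not taken from the Unet-TTS paper.
    """
    def __init__(self, n_mels=80, hidden=(128, 256, 512)):
        super().__init__()
        # Encoder: each block halves the frame axis.
        self.down = nn.ModuleList()
        ch = n_mels
        for h in hidden:
            self.down.append(nn.Sequential(
                nn.Conv1d(ch, h, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
            ))
            ch = h
        # Decoder: each block doubles the frame axis, then the
        # matching encoder output is concatenated (the skip path).
        self.up = nn.ModuleList()
        for h in reversed(hidden[:-1]):
            self.up.append(nn.Sequential(
                nn.ConvTranspose1d(ch, h, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
            ))
            ch = 2 * h  # channels after concatenating the skip
        self.out = nn.ConvTranspose1d(ch, n_mels, kernel_size=4, stride=2, padding=1)

    def forward(self, mel):
        # mel: (batch, n_mels, frames); frames divisible by 2**len(hidden)
        skips, x = [], mel
        for block in self.down:
            x = block(x)
            skips.append(x)
        skips.pop()  # the bottleneck output is not reused as a skip
        for block in self.up:
            x = block(x)
            x = torch.cat([x, skips.pop()], dim=1)  # skip connection
        return self.out(x)

net = UNet1D()
mel = torch.randn(2, 80, 128)       # 2 utterances, 80 mel bins, 128 frames
print(net(mel).shape)               # torch.Size([2, 80, 128])
```

The skip connections are what let the decoder recover fine-grained, utterance-level spectral detail that the strided encoder would otherwise compress away, which is the property the abstract attributes to the U-Net backbone.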