Non-parallel many-to-many voice conversion remains an interesting but challenging speech processing task. Recently, AutoVC, a conditional-autoencoder-based method, achieved excellent conversion results by disentangling speaker identity and speech content through information-constraining bottlenecks. However, because the model is trained purely as an autoencoder, it is difficult to verify how well content and speaker identity are actually separated. In this paper, a novel voice conversion framework, named $\boldsymbol{T}$ext $\boldsymbol{G}$uided $\boldsymbol{A}$utoVC (TGAVC), is proposed to separate content and timbre from speech more effectively, where an expected content embedding, produced from the text transcriptions, is designed to guide the extraction of voice content. In addition, adversarial training is applied to eliminate speaker identity information from the estimated content embedding extracted from speech. Under the guidance of the expected content embedding and the adversarial training, the content encoder is trained to extract a speaker-independent content embedding from speech. Experiments on the AIShell-3 dataset show that the proposed model outperforms AutoVC in terms of the naturalness and similarity of the converted speech.
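To make the training objective described above concrete, the following is a minimal PyTorch-style sketch of the three terms the abstract mentions: an autoencoder reconstruction loss, a text-guidance loss pulling the content embedding extracted from speech toward the expected content embedding produced from the transcription, and an adversarial loss penalizing any speaker identity remaining in the estimated content embedding. All module names, interfaces, and loss weights here (`content_encoder`, `text_encoder`, `decoder`, `speaker_clf`, `lambda_guide`, `lambda_adv`) are illustrative assumptions, not the paper's actual identifiers or hyperparameters.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules standing in for the TGAVC components named in the
# abstract; their interfaces are assumptions made for illustration:
#   content_encoder : mel-spectrogram -> estimated content embedding
#   text_encoder    : text transcription -> expected content embedding
#   decoder         : (content embedding, speaker embedding) -> mel-spectrogram
#   speaker_clf     : content embedding -> speaker logits (the adversary)

def tgavc_generator_loss(mel, text_ids, spk_emb, spk_label,
                         content_encoder, text_encoder, decoder, speaker_clf,
                         lambda_guide=1.0, lambda_adv=0.1):
    """Sketch of the encoder/decoder training objective (assumed weights)."""
    # Estimated content embedding, extracted directly from speech.
    content_est = content_encoder(mel)
    # Expected content embedding, produced from the text transcription.
    content_exp = text_encoder(text_ids)

    # 1) Autoencoder reconstruction loss.
    mel_rec = decoder(content_est, spk_emb)
    loss_rec = F.l1_loss(mel_rec, mel)

    # 2) Text guidance: pull the speech-derived content embedding toward
    #    the text-derived one (detached so only the content encoder moves).
    loss_guide = F.mse_loss(content_est, content_exp.detach())

    # 3) Adversarial term: train the content encoder to *fool* the speaker
    #    classifier, i.e. maximize its loss, so that speaker identity is
    #    removed from the content embedding.
    spk_logits = speaker_clf(content_est)
    loss_adv = -F.cross_entropy(spk_logits, spk_label)

    return loss_rec + lambda_guide * loss_guide + lambda_adv * loss_adv
```

In this formulation the speaker classifier itself would be updated in a separate step that minimizes the ordinary cross-entropy on `content_est.detach()`; an equivalent alternative is a gradient reversal layer between the content encoder and the classifier. Which variant the paper uses is not specified in the abstract.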