Voice conversion (VC) technologies have improved greatly in recent years with the help of deep learning, but their ability to produce natural-sounding utterances under different conditions remains unclear. In this paper, we present a thorough study of the robustness of known VC models. We also modify these models, for example by replacing their speaker embeddings, to further improve performance. We find that sampling rate and audio duration greatly influence voice conversion. All of the VC models degrade on unseen data, but AdaIN-VC is relatively more robust. Moreover, a speaker embedding jointly trained with the conversion model is more suitable for voice conversion than one trained on speaker identification.
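Since the sampling rate strongly affects conversion quality, inputs should be resampled to the rate a VC model was trained on before inference. A minimal sketch of that preprocessing step (assuming a hypothetical model trained at 16 kHz; not the paper's own pipeline) using scipy:

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def match_sample_rate(audio: np.ndarray, orig_sr: int, target_sr: int) -> np.ndarray:
    """Resample a waveform so its rate matches the model's training rate."""
    g = gcd(orig_sr, target_sr)
    # Polyphase resampling by the reduced up/down integer ratio.
    return resample_poly(audio, target_sr // g, orig_sr // g)

# One second of a 440 Hz tone recorded at 44.1 kHz, resampled to 16 kHz.
sr_in, sr_out = 44100, 16000
t = np.arange(sr_in) / sr_in
tone = np.sin(2 * np.pi * 440 * t)
resampled = match_sample_rate(tone, sr_in, sr_out)
print(len(resampled))  # one second at 16 kHz -> 16000 samples
```

In practice the target rate must match the training configuration of the specific model (e.g. 16 kHz or 24 kHz), since a mismatch shifts the spectral content the model sees.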