With the development of automatic speech recognition (ASR) and text-to-speech (TTS) technologies, high-quality voice conversion (VC) can be achieved by extracting source content information and target speaker information and reconstructing the waveform. However, current methods still leave room for improvement in inference speed. In this study, we propose a lightweight VITS-based VC model that uses the HuBERT-Soft model to extract content features free of speaker information. In subjective and objective evaluations of the synthesized speech, the proposed model achieves competitive results in naturalness and similarity. Importantly, unlike the original VITS model, we use the inverse short-time Fourier transform (iSTFT) to replace the most computationally expensive part. Experimental results show that our model can generate samples at over 5000 kHz on an NVIDIA RTX 3090 GPU and over 250 kHz on an Intel i9-10900K CPU, achieving competitive speed for the same hardware configuration.
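The core speed-up described above is replacing the learned upsampling decoder with a direct inverse STFT over predicted spectral frames. The paper does not give implementation details, so the following is only a minimal sketch of the idea using SciPy: given (hypothetical) magnitude and phase frames that a model head would predict, a single iSTFT call recovers the waveform, avoiding any expensive transposed-convolution stack. The function name `istft_head` and all parameter values here are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import stft, istft

def istft_head(magnitude, phase, n_fft=512, hop=128, fs=16000):
    """Illustrative iSTFT 'decoder': turn predicted magnitude/phase
    frames of shape (n_fft // 2 + 1, n_frames) into a waveform with
    one inverse STFT instead of a neural upsampling decoder."""
    spec = magnitude * np.exp(1j * phase)  # complex spectrogram
    _, wav = istft(spec, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return wav

# Round-trip sanity check on a toy sine, standing in for model output:
fs, n_fft, hop = 16000, 512, 128
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 440.0 * t)
_, _, spec = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
wav = istft_head(np.abs(spec), np.angle(spec), n_fft=n_fft, hop=hop, fs=fs)
err = np.max(np.abs(wav[: len(x)] - x))
```

Because the iSTFT is a fixed linear transform, its cost is negligible next to a HiFi-GAN-style decoder, which is consistent with the reported throughput gains; in practice the model would predict `magnitude` and `phase` from latent features rather than taking them from an analysis STFT as in this round-trip check.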