This paper presents ByteSing, a Chinese singing voice synthesis (SVS) system based on duration-allocated Tacotron-like acoustic models and WaveRNN neural vocoders. Unlike conventional SVS models, the proposed ByteSing employs Tacotron-like encoder-decoder structures as the acoustic models, in which CBHG modules and recurrent neural networks (RNNs) are explored as the encoder and decoder, respectively. Meanwhile, an auxiliary phoneme duration prediction model is utilized to expand the input sequence, which enhances the controllability, stability, and tempo prediction accuracy of the model. A WaveRNN neural vocoder is also adopted to further improve the voice quality of the synthesized songs. Both objective and subjective experimental results show that the proposed SVS method can produce quite natural, expressive, and high-fidelity songs by improving pitch and spectrogram prediction accuracy, and that the models using the attention mechanism achieve the best performance.
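To make the duration-allocated expansion concrete, the following is a minimal NumPy sketch of how an input phoneme sequence might be expanded to frame level using predicted durations; the function name `expand_by_duration` and the toy dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def expand_by_duration(phoneme_features, durations):
    """Repeat each phoneme's feature vector durations[i] times so the
    encoder input aligns frame-by-frame with the acoustic targets
    (a sketch of the duration-allocated expansion described above).

    phoneme_features: (num_phonemes, feat_dim) array
    durations: (num_phonemes,) integer frame counts, e.g. produced by
               an auxiliary duration prediction model
    """
    return np.repeat(phoneme_features, durations, axis=0)

# Hypothetical usage: 3 phonemes with 80-dim features, predicted to
# span 2, 5, and 3 acoustic frames respectively.
feats = np.random.randn(3, 80)
frames = expand_by_duration(feats, np.array([2, 5, 3]))
assert frames.shape == (10, 80)
```

This kind of explicit expansion is what lets the model trade free-running attention alignment for score- and duration-driven control over tempo.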