This paper presents Sinsy, a deep neural network (DNN)-based singing voice synthesis (SVS) system. In recent years, DNNs have been utilized in statistical parametric SVS systems, and DNN-based SVS systems have demonstrated better performance than conventional hidden Markov model-based ones. SVS systems are required to synthesize a singing voice whose pitch and timing strictly follow a given musical score. Additionally, singing expressions that are not described in the musical score, such as vibrato and timing fluctuations, should be reproduced. The proposed system is composed of four modules: a time-lag model, a duration model, an acoustic model, and a vocoder, enabling it to synthesize singing voices that take these characteristics into account. To better model a singing voice, the proposed system incorporates improved pitch and vibrato modeling and better training criteria into the acoustic model. In addition, we incorporate PeriodNet, a non-autoregressive neural vocoder that is robust to pitch variation, into our system to generate high-fidelity singing voice waveforms. Moreover, we propose automatic pitch correction techniques for DNN-based SVS that synthesize singing voices with correct pitch even if the training data contains out-of-tune phrases. Experimental results show that our system can synthesize a singing voice with better timing, more natural vibrato, and correct pitch, and that it achieves better mean opinion scores in subjective evaluation tests.
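To make the four-module structure described above concrete, the following is a minimal sketch of how such a pipeline could be chained together. It is not the authors' implementation; all class and function names are hypothetical placeholders, and each stage is a stub standing in for a trained DNN (with PeriodNet in the vocoder role).

```python
# Hypothetical sketch of a score-to-waveform pipeline with the four modules
# named in the abstract: time-lag model -> duration model -> acoustic model -> vocoder.
from dataclasses import dataclass
from typing import List


@dataclass
class Note:
    pitch: int           # MIDI note number taken from the musical score
    start_frame: int     # note-on time from the score (in frames)
    num_frames: int      # note length from the score (in frames)
    phonemes: List[str]  # lyrics of the note, as phonemes


def time_lag_model(notes: List[Note]) -> List[int]:
    """Predict how far each note's actual onset deviates from the score (stub)."""
    return [0 for _ in notes]  # zero lag: start exactly on the score timing


def duration_model(notes: List[Note], lags: List[int]) -> List[List[int]]:
    """Distribute each note's length over its phonemes (stub)."""
    return [[n.num_frames // max(len(n.phonemes), 1)] * len(n.phonemes)
            for n in notes]


def acoustic_model(notes: List[Note], durations: List[List[int]]) -> List[float]:
    """Predict per-frame acoustic features such as F0 with vibrato (stub)."""
    feats = []
    for note, durs in zip(notes, durations):
        feats.extend([float(note.pitch)] * sum(durs))
    return feats


def vocoder(features: List[float]) -> List[float]:
    """Generate a waveform from acoustic features; Sinsy uses PeriodNet here (stub)."""
    return [0.0] * len(features)  # silent placeholder waveform


if __name__ == "__main__":
    score = [Note(pitch=60, start_frame=0, num_frames=40, phonemes=["s", "a"])]
    lags = time_lag_model(score)
    durations = duration_model(score, lags)
    features = acoustic_model(score, durations)
    waveform = vocoder(features)
    print(len(waveform), "waveform frames (placeholder)")
```

In the actual system each stub would be replaced by a trained network, and the time-lag and duration predictions are what allow the synthesized timing to deviate naturally from the score while still following it.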