Singing voice synthesis is a generative task that requires multi-dimensional control of the singing model, including lyrics, pitch, and duration, as well as the singer's timbre and singing skills such as vibrato. In this paper, we propose SUSing, a SU-net-based model for singing voice synthesis. Synthesizing the singing voice is treated as a translation task from lyrics and music score to spectrum. The lyrics and music score information is encoded into a two-dimensional feature representation through convolution layers. This two-dimensional feature and its frequency spectrum are then mapped to the target spectrum in an autoregressive manner by the SU-net network. Within SU-net, stripe pooling replaces conventional global pooling to capture the vertical frequency relationships in the spectrum and the evolution of frequency over time. Experimental results on the public Kiritan dataset show that the proposed method synthesizes more natural singing voices.
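To illustrate the stripe pooling idea mentioned above, the following is a minimal numpy sketch, not the paper's actual implementation: instead of collapsing a spectrogram-like feature map to a single value (global pooling), each axis is averaged separately into a frequency strip and a time strip, which are then broadcast back and fused, preserving direction-specific structure.

```python
import numpy as np

def stripe_pooling(feat):
    """Sketch of stripe pooling on a (freq_bins, time_frames) feature map.

    Global average pooling would reduce `feat` to one scalar, discarding
    both the per-frequency and per-frame structure. Stripe pooling keeps
    one strip per axis instead.
    """
    # Horizontal strip: average over time for each frequency bin -> (freq_bins, 1).
    # Captures the vertical (harmonic) relationship across frequency.
    h_strip = feat.mean(axis=1, keepdims=True)
    # Vertical strip: average over frequency for each time frame -> (1, time_frames).
    # Captures how energy changes along the time axis.
    v_strip = feat.mean(axis=0, keepdims=True)
    # Fuse by broadcasting both strips back to the full map (simple sum here;
    # a real network would pass each strip through learned layers first).
    return h_strip + v_strip
```

For comparison, `feat.mean()` (global pooling) returns a single scalar for the whole map, whereas the fused output above keeps the full `(freq_bins, time_frames)` shape, so each position is described by its row and column context.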