Recently, end-to-end Korean singing voice systems have been designed to generate realistic singing voices. However, these systems still suffer from a lack of robustness in terms of pronunciation accuracy. In this paper, we propose N-Singer, a non-autoregressive Korean singing voice system, to synthesize accurately pronounced Korean singing voices in parallel. N-Singer consists of a Transformer-based mel-generator, a convolutional network-based postnet, and voicing-aware discriminators. Its contributions are as follows. First, for accurate pronunciation, N-Singer models linguistic and pitch information separately, without other acoustic features. Second, to generate improved mel-spectrograms, N-Singer uses a combination of Transformer-based and convolutional network-based modules. Third, in adversarial training, voicing-aware conditional discriminators are used to capture the harmonic features of voiced segments and the noise components of unvoiced segments. The experimental results demonstrate that N-Singer can synthesize natural singing voices in parallel, with more accurate pronunciation than the baseline model.
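The two-stage mel generation described above, a Transformer-based generator whose coarse output is refined by a convolutional postnet predicting a residual, can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's implementation: all module names, layer counts, and dimensions are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class MelGenerator(nn.Module):
    """Transformer-based mel-generator (illustrative): maps frame-level
    linguistic/pitch embeddings to a coarse mel-spectrogram."""
    def __init__(self, d_model=256, n_mels=80, n_layers=4, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj = nn.Linear(d_model, n_mels)

    def forward(self, x):                    # x: (batch, frames, d_model)
        return self.proj(self.encoder(x))    # (batch, frames, n_mels)

class ConvPostnet(nn.Module):
    """Convolutional postnet (illustrative): predicts a residual that
    refines the coarse mel-spectrogram."""
    def __init__(self, n_mels=80, channels=256, kernel=5, n_layers=3):
        super().__init__()
        layers, in_ch = [], n_mels
        for _ in range(n_layers - 1):
            layers += [nn.Conv1d(in_ch, channels, kernel, padding=kernel // 2),
                       nn.Tanh()]
            in_ch = channels
        layers.append(nn.Conv1d(in_ch, n_mels, kernel, padding=kernel // 2))
        self.net = nn.Sequential(*layers)

    def forward(self, mel):                  # mel: (batch, frames, n_mels)
        # Conv1d expects (batch, channels, frames), hence the transposes.
        return mel + self.net(mel.transpose(1, 2)).transpose(1, 2)

# Toy forward pass: 2 utterances, 100 frames of 256-dim input features.
x = torch.randn(2, 100, 256)
coarse = MelGenerator()(x)                   # coarse mel: (2, 100, 80)
refined = ConvPostnet()(coarse)              # refined mel: (2, 100, 80)
```

In adversarial training, the refined mel-spectrogram would then be scored by the voicing-aware conditional discriminators, which treat voiced and unvoiced segments separately.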