Neural speech synthesis models have recently demonstrated the ability to synthesize high-quality speech for text-to-speech and compression applications. These new models often require powerful GPUs to achieve real-time operation, so reducing their complexity would open the way for many new applications. We propose LPCNet, a WaveRNN variant that combines linear prediction with recurrent neural networks to significantly improve the efficiency of speech synthesis. We demonstrate that LPCNet can achieve significantly higher quality than WaveRNN for the same network size, and that high-quality LPCNet speech synthesis is achievable with a complexity under 3 GFLOPS. This makes it easier to deploy neural synthesis applications on lower-power devices, such as embedded systems and mobile phones.
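The core idea stated above can be illustrated with a minimal sketch of the linear-prediction step. In LPCNet, a short-term linear predictor computes p_t from past samples, and the recurrent network only has to predict the excitation e_t, so the final sample is s_t = p_t + e_t. The coefficient values and the impulse excitation below are placeholders for illustration: the real system derives the prediction coefficients per frame from the input features and predicts the excitation with a recurrent network.

```python
import numpy as np

def lpc_predict(history, lpc):
    """Short-term linear prediction: p_t = sum_i a_i * s_{t-i}.
    `history` holds the most recent samples, oldest first."""
    return float(np.dot(lpc, history[::-1]))

# 16th-order prediction, the order used by LPCNet. The uniform
# coefficients are placeholders, not coefficients from a real frame.
order = 16
lpc = np.full(order, 1.0 / order)
history = np.zeros(order)

# The network would output the excitation e_t; here a single impulse
# stands in for the model output, and the predictor fills in the rest.
samples = []
for t in range(4):
    p = lpc_predict(history, lpc)
    e = 1.0 if t == 0 else 0.0   # stand-in for the predicted excitation
    s = p + e
    samples.append(s)
    history = np.append(history[1:], s)
```

Because the predictor captures the easy, linear part of the signal, the network's job shrinks to modeling the residual, which is one source of the complexity reduction the abstract claims.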