GAN vocoders are currently among the state-of-the-art methods for building high-quality neural waveform generative models. However, most of their architectures require tens of billions of floating-point operations per second (GFLOPS) to generate speech waveforms in a samplewise manner. This makes it challenging to run GAN vocoders on standard CPUs without accelerators or parallel computers. In this work, we propose a new architecture for GAN vocoders that relies mainly on recurrent and fully-connected networks to directly generate the time-domain signal in a framewise manner. This considerably reduces the computational cost and enables very fast generation on both GPUs and low-complexity CPUs. Experimental results show that our Framewise WaveGAN vocoder achieves significantly higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet, at a very low complexity of 1.2 GFLOPS. This makes GAN vocoders more practical on edge and low-power devices.
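To illustrate the framewise idea described above, the following is a minimal PyTorch sketch of a generator in which a recurrent network runs once per frame of conditioning features and fully-connected layers map each hidden state to an entire frame of waveform samples. The class name, layer sizes, frame length, and feature dimension are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class FramewiseGenerator(nn.Module):
    """Sketch of framewise waveform generation: one RNN step per frame,
    then fully-connected layers emit a whole frame of samples at once
    (instead of one sample per step as in samplewise vocoders)."""
    def __init__(self, cond_dim=80, hidden_dim=256, frame_size=160):
        super().__init__()
        # Recurrent network over conditioning (e.g. acoustic) features
        self.rnn = nn.GRU(cond_dim, hidden_dim, batch_first=True)
        # Fully-connected head mapping each hidden state to frame_size samples
        self.fc = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, frame_size),
            nn.Tanh(),
        )

    def forward(self, cond):
        # cond: (batch, n_frames, cond_dim) conditioning features
        h, _ = self.rnn(cond)          # (batch, n_frames, hidden_dim)
        frames = self.fc(h)            # (batch, n_frames, frame_size)
        return frames.flatten(1)       # (batch, n_frames * frame_size) waveform

# Usage: 100 frames of 80-dim features -> 16000 samples (1 s at 16 kHz)
wav = FramewiseGenerator()(torch.randn(1, 100, 80))
print(wav.shape)  # torch.Size([1, 16000])
```

Because the expensive layers run once per frame rather than once per sample, the per-second operation count drops by roughly the frame length, which is the source of the low-GFLOPS budget the abstract reports.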