Generating sound effects that humans want is an important topic; however, few studies have addressed this area of sound generation. In this study, we investigate generating sound conditioned on a text prompt and propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder. The framework first uses the decoder to transfer the text features extracted by the text encoder into a mel-spectrogram with the help of the VQ-VAE, and then the vocoder transforms the generated mel-spectrogram into a waveform. We found that the decoder significantly influences the generation performance; thus, we focus on designing a good decoder in this study. We begin with the traditional autoregressive (AR) decoder, which has proven to be state-of-the-art in previous sound generation works. However, the AR decoder always predicts the mel-spectrogram tokens one by one in order, which introduces unidirectional bias and error accumulation. Moreover, with the AR decoder, the sound generation time increases linearly with the sound duration. To overcome these shortcomings, we propose a non-autoregressive decoder based on the discrete diffusion model, named Diffsound. Specifically, Diffsound predicts all of the mel-spectrogram tokens in one step and then refines the predicted tokens in the next step, so that the best prediction is obtained after several steps. Our experiments show that the proposed Diffsound not only produces better text-to-sound generation results than the AR decoder but also generates sound faster, e.g., MOS: 3.56 \textit{vs.} 2.786, with a generation speed five times faster than that of the AR decoder.
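To make the contrast between the two decoders concrete, the following is a minimal sketch, not the authors' implementation: the network \texttt{token\_predictor}, its signature, the greedy \texttt{argmax} refinement rule, and the step count are illustrative assumptions; the only point carried over from the abstract is that the AR decoder needs one network call per token, whereas a Diffsound-style decoder predicts all mel-spectrogram tokens in parallel and refines them over a fixed number of steps.

\begin{verbatim}
# Illustrative sketch (assumed interface, not the paper's code):
# `token_predictor(text_feat, tokens)` is a hypothetical network that
# returns per-position logits over the VQ-VAE codebook,
# shaped (batch, seq_len, codebook_size).

import torch

def ar_decode(token_predictor, text_feat, seq_len):
    # AR baseline: predict mel-spectrogram tokens one by one, left to right.
    tokens = torch.zeros(1, seq_len, dtype=torch.long)
    for t in range(seq_len):                          # O(seq_len) network calls
        logits = token_predictor(text_feat, tokens)
        tokens[0, t] = logits[0, t].argmax()          # fix position t, move on
    return tokens

def diffsound_decode(token_predictor, text_feat, seq_len,
                     num_steps=10, mask_id=0):
    # Non-AR decoding: start from fully masked tokens, predict every
    # position in parallel, then refine over a fixed number of steps.
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for _ in range(num_steps):                        # O(num_steps) network calls
        logits = token_predictor(text_feat, tokens)
        tokens = logits.argmax(dim=-1)                # refine all positions at once
    return tokens
\end{verbatim}

Under these assumptions, the runtime contrast in the abstract follows directly: the AR loop scales with the token sequence length (and hence the sound duration), while the non-AR loop scales only with the chosen number of refinement steps.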