Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder. Our method is non-autoregressive: it learns to generate sequences of latent embeddings through the reverse diffusion process, offering parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
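To make the core idea concrete, the sketch below illustrates one possible training step for a diffusion model over the continuous latents of a pre-trained VAE, using the standard DDPM epsilon-prediction objective. This is a minimal illustration, not the paper's implementation: the names `vae_encoder` and `denoiser`, the linear noise schedule, and the latent shapes are all assumptions introduced here for exposition.

```python
import torch
import torch.nn.functional as F

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(denoiser, z0):
    """One DDPM training step on a batch of continuous latents z0.

    `denoiser` is any network mapping (noisy latents, timestep) -> predicted
    noise; it stands in for the paper's sequence model over latent embeddings.
    """
    b = z0.shape[0]
    t = torch.randint(0, T, (b,), device=z0.device)        # random timestep per example
    eps = torch.randn_like(z0)                             # Gaussian noise
    a_bar = alpha_bars.to(z0.device)[t].view(b, *([1] * (z0.dim() - 1)))
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps   # forward process q(z_t | z_0)
    return F.mse_loss(denoiser(z_t, t), eps)               # epsilon-prediction loss

# Usage sketch: embed discrete token sequences with the frozen, pre-trained
# VAE encoder, then train the denoiser on the resulting continuous latents.
# (vae_encoder and denoiser are hypothetical placeholders.)
#
# with torch.no_grad():
#     z0 = vae_encoder(tokens)              # (batch, seq_len, latent_dim)
# loss = diffusion_loss(denoiser, z0)
# loss.backward()
```

Because the denoiser refines the entire latent sequence at once, generation proceeds in parallel over sequence positions and uses a fixed number of reverse-process steps, in contrast to the token-by-token decoding of an autoregressive model.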