Text-to-audio (TTA) systems have recently gained attention for their ability to synthesize general audio from text descriptions. However, previous TTA studies have suffered from limited generation quality and high computational costs. In this study, we propose AudioLDM, a TTA system built on a latent space that learns continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embeddings while providing text embeddings as the condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM gains advantages in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance as measured by both objective and subjective metrics (e.g., Fréchet distance). Moreover, AudioLDM is the first TTA system to enable various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at https://audioldm.github.io.
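To make the mechanism described above concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of the CLAP-conditioned latent diffusion idea: the denoiser is trained with CLAP *audio* embeddings as the condition, and at sampling time a CLAP *text* embedding is substituted, relying on the shared CLAP embedding space. All names here (`ClapEncoder`, `LatentUNet`, the dimensions) are hypothetical placeholders, and the toy MLP denoiser stands in for the real UNet, VAE, and vocoder pipeline.

```python
# Sketch of AudioLDM's core idea under stated assumptions: a DDPM in a
# learned audio latent space, conditioned on CLAP audio embeddings during
# training and on CLAP text embeddings during sampling.
import torch
import torch.nn as nn

EMB_DIM, LATENT_DIM, T_STEPS = 512, 8, 1000  # illustrative sizes only

class ClapEncoder(nn.Module):
    """Stand-in for a pretrained CLAP model mapping audio or text
    into a shared embedding space (placeholder linear branches)."""
    def __init__(self):
        super().__init__()
        self.audio_proj = nn.Linear(64, EMB_DIM)
        self.text_proj = nn.Linear(64, EMB_DIM)

    def embed_audio(self, mel):    # mel: (B, 64) placeholder features
        return self.audio_proj(mel)

    def embed_text(self, tokens):  # tokens: (B, 64) placeholder features
        return self.text_proj(tokens)

class LatentUNet(nn.Module):
    """Toy denoiser eps_theta(z_t, t, c); a conditional UNet in the real system."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 1 + EMB_DIM, 256), nn.SiLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, z_t, t, cond):
        t_feat = t.float().unsqueeze(-1) / T_STEPS  # crude timestep encoding
        return self.net(torch.cat([z_t, t_feat, cond], dim=-1))

clap, denoiser = ClapEncoder(), LatentUNet()
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(mel, z0):
    """One DDPM noise-prediction step. The condition is the CLAP embedding
    of the audio itself, so the LDM trains without paired text."""
    cond = clap.embed_audio(mel)
    t = torch.randint(0, T_STEPS, (z0.shape[0],))
    noise = torch.randn_like(z0)
    ab = alphas_bar[t].unsqueeze(-1)
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * noise
    return ((denoiser(z_t, t, cond) - noise) ** 2).mean()

@torch.no_grad()
def sample(tokens):
    """Ancestral DDPM sampling conditioned on the CLAP *text* embedding:
    the shared CLAP space lets the audio-trained model follow text prompts."""
    cond = clap.embed_text(tokens)
    z = torch.randn(tokens.shape[0], LATENT_DIM)
    for i in reversed(range(T_STEPS)):
        t = torch.full((z.shape[0],), i)
        eps = denoiser(z, t, cond)
        a, ab = 1.0 - betas[i], alphas_bar[i]
        z = (z - betas[i] / (1 - ab).sqrt() * eps) / a.sqrt()
        if i > 0:
            z = z + betas[i].sqrt() * torch.randn_like(z)
    return z  # the full system decodes z with a VAE decoder and vocoder

# Illustrative usage with dummy tensors:
mel, z0, tokens = torch.randn(4, 64), torch.randn(4, LATENT_DIM), torch.randn(4, 64)
loss = training_step(mel, z0)  # audio-only conditioning at training time
z_hat = sample(tokens)         # text-conditioned generation at inference
```

The decoupling shown here, conditioning on audio embeddings at training time and text embeddings at sampling time, reflects the abstract's claim: because the diffusion model never has to learn the cross-modal text-to-audio mapping itself, it can be trained more efficiently while preserving generation quality.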