Audio editing is applicable to various purposes, such as adding background sound effects, replacing a musical instrument, and repairing damaged audio. Recently, some diffusion-based methods have achieved zero-shot audio editing by running a diffusion and denoising process conditioned on a text description of the output audio. However, these methods still have some problems: 1) they are not trained on editing tasks and cannot ensure good editing effects; 2) they can erroneously modify audio segments that do not require editing; 3) they need a complete description of the output audio, which is not always available or necessary in practical scenarios. In this work, we propose AUDIT, an instruction-guided audio editing model based on latent diffusion models. Specifically, AUDIT has three main design features: 1) we construct triplet training data (instruction, input audio, output audio) for different audio editing tasks and train a diffusion model that takes the instruction and the input (to-be-edited) audio as conditions and generates the output (edited) audio; 2) AUDIT automatically learns to modify only the segments that need editing by comparing the differences between the input and output audio; 3) it requires only edit instructions, rather than full descriptions of the target audio, as text input. AUDIT achieves state-of-the-art results on both objective and subjective metrics for several audio editing tasks (e.g., adding, dropping, replacement, inpainting, super-resolution). Demo samples are available at https://audit-demo.github.io/.
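The first design feature above hinges on constructing (instruction, input audio, output audio) triplets. The sketch below illustrates one plausible way to build such a triplet for the "adding" task by mixing a sound event into clean input audio at a random offset; the function name, the instruction template, and the additive mixing scheme are illustrative assumptions, not the paper's exact data pipeline.

```python
import random

def make_add_triplet(input_audio, event_audio, event_label):
    """Build one (instruction, input, output) triplet for the 'add' task.

    Hypothetical sketch: the output audio is the input audio with the
    event waveform mixed in additively at a random offset, and the
    instruction is generated from a simple text template. Audio is
    represented as a plain list of float samples for self-containment.
    """
    assert len(event_audio) <= len(input_audio), "event must fit in the clip"
    offset = random.randrange(0, len(input_audio) - len(event_audio) + 1)
    output_audio = list(input_audio)  # copy; segments outside the event stay identical
    for i, sample in enumerate(event_audio):
        output_audio[offset + i] += sample  # additive mix of the new event
    instruction = f"Add {event_label} in the background"
    return instruction, input_audio, output_audio
```

Because the output differs from the input only where the event was mixed in, a model trained on such triplets can, as the abstract notes, learn to leave the untouched segments unmodified.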