We propose the first joint audio-video generation framework that delivers engaging watching and listening experiences simultaneously, towards high-quality realistic videos. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion) with two coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion consists of a sequential multi-modal U-Net designed for a joint denoising process. Two subnets for audio and video learn to gradually generate aligned audio-video pairs from Gaussian noise. To ensure semantic consistency across modalities, we propose a novel random-shift based attention block bridging the two subnets, which enables efficient cross-modal alignment and thus reinforces the fidelity of each modality for the other. Extensive experiments show superior results in unconditional audio-video generation and zero-shot conditional tasks (e.g., video-to-audio). In particular, we achieve the best FVD and FAD on the Landscape and AIST++ dancing datasets. Turing tests with 10k votes further demonstrate dominant preferences for our model. The code and pre-trained models can be downloaded at https://github.com/researchmm/MM-Diffusion.
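To make the cross-modal bridging idea concrete, below is a minimal, hedged sketch of random-shift cross-modal attention: rather than attending over all audio tokens, video queries attend to a small audio window whose start position is randomly shifted at each call. This is an illustrative reconstruction, not the authors' implementation; the module name, window size, and tensor shapes are assumptions made for the example.

```python
# Illustrative sketch (assumed shapes and names, not the official MM-Diffusion code):
# video tokens attend to a randomly shifted window of audio tokens, which keeps
# cross-modal attention cheap while still aligning the two streams over training.
import torch
import torch.nn as nn


class RandomShiftCrossAttention(nn.Module):
    """Cross-attention from video tokens to a randomly shifted audio window."""

    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, video_tokens: torch.Tensor, audio_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, Tv, D) flattened video features
        # audio_tokens: (B, Ta, D) audio features along time
        b, ta, _ = audio_tokens.shape
        # Randomly shift the start of the audio window at every call (e.g., each training step).
        start = torch.randint(0, max(ta - self.window, 1), (1,)).item()
        audio_window = audio_tokens[:, start:start + self.window]
        # Video queries attend only to the shifted audio window (cheaper than full attention).
        attended, _ = self.attn(video_tokens, audio_window, audio_window)
        return video_tokens + attended  # residual connection


# Usage: bridge the two subnets at one U-Net resolution.
if __name__ == "__main__":
    block = RandomShiftCrossAttention(dim=64)
    video = torch.randn(2, 16, 64)   # 2 samples, 16 video tokens, 64-dim
    audio = torch.randn(2, 128, 64)  # 2 samples, 128 audio tokens, 64-dim
    print(block(video, audio).shape)  # torch.Size([2, 16, 64])
```

A symmetric block (audio queries attending to a shifted video window) can bridge the subnets in the other direction; stacking such blocks at several U-Net resolutions is one plausible way to realize the coupled denoising described above.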