The recent success of generative models shows that leveraging a multi-modal embedding space makes it possible to manipulate an image using text. However, manipulating an image with sources other than text, such as sound, is not straightforward due to the dynamic characteristics of such sources. In particular, sound can convey vivid emotions and dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multi-modal embedding space. We use a direct latent optimization method based on the aligned embeddings for sound-guided image manipulation. We verify the effectiveness of our sound-guided image manipulation quantitatively and qualitatively. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods.
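The direct latent optimization described above can be sketched as follows. This is a toy illustration, not the paper's implementation: a fixed random linear map `W` stands in for the generator-plus-image-encoder pipeline, and `e_audio` stands in for an audio embedding already aligned in the shared space. The optimization takes gradient steps on a latent code to increase the cosine similarity between the resulting image embedding and the target audio embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_latent, dim_embed = 16, 8

# Hypothetical stand-in for the generator + image encoder (a linear map here).
W = rng.normal(size=(dim_embed, dim_latent))

# Hypothetical target: an audio embedding assumed to be aligned with
# the image-text space, unit-normalized.
e_audio = rng.normal(size=dim_embed)
e_audio /= np.linalg.norm(e_audio)

def cosine(z):
    """Cosine similarity between the image embedding of z and e_audio."""
    e_img = W @ z
    return float(e_img @ e_audio / np.linalg.norm(e_img))

def grad_cosine(z):
    """Gradient of cosine(z) with respect to the latent code z."""
    e_img = W @ z
    n = np.linalg.norm(e_img)
    # d cos / d e_img = e_audio / n - (e_img . e_audio) * e_img / n^3
    g_e = e_audio / n - (e_img @ e_audio) * e_img / n**3
    return W.T @ g_e

z = rng.normal(size=dim_latent)
before = cosine(z)
for _ in range(200):
    z += 0.1 * grad_cosine(z)   # gradient ascent on the similarity
after = cosine(z)
```

In the actual framework, the same idea applies with a GAN latent code, a real image encoder, and the trained audio encoder in place of these toy components.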