Recent successes suggest that an image can be manipulated with a text prompt, e.g., a landscape scene on a sunny day can be transformed into the same scene on a rainy day, driven by the text input "raining". These approaches often leverage a StyleCLIP-based image generator, which exploits a multi-modal (text and image) embedding space. However, we observe that such text inputs are often a bottleneck in providing and synthesizing rich semantic cues, e.g., differentiating heavy rain from rain with thunderstorms. To address this issue, we advocate leveraging an additional modality, sound, which has notable advantages for image manipulation, as it can convey more diverse semantic cues (vivid emotions or dynamic expressions of the natural world) than text. In this paper, we propose a novel approach that first extends the image-text joint embedding space with sound and then applies a direct latent optimization method to manipulate a given image based on an audio input, e.g., the sound of rain. Our extensive experiments show that our sound-guided image manipulation approach produces semantically and visually more plausible results than state-of-the-art text- and sound-guided image manipulation methods, which is further confirmed by our human evaluations. Our downstream task evaluations also show that our learned image-text-sound joint embedding space effectively encodes sound inputs.
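To make the described pipeline concrete, below is a minimal sketch of the direct latent optimization step, assuming a pretrained StyleGAN2 generator (`generator`), a CLIP image encoder (`clip_model`), and an audio encoder (`audio_encoder`) already trained to map sound into the image-text joint embedding space via contrastive learning. All three names, the function signature, and the hyperparameter values are hypothetical placeholders for illustration, not the authors' released API.

```python
# A sketch of sound-driven direct latent optimization, in the spirit of
# StyleCLIP: optimize a StyleGAN latent so the CLIP embedding of the
# generated image moves toward the embedding of the input sound.
import torch
import torch.nn.functional as F

def manipulate_with_sound(generator, clip_model, audio_encoder,
                          w_init, audio, steps=200, lr=0.05, lambda_reg=0.01):
    """Optimize a StyleGAN latent code so the synthesized image matches a sound."""
    w = w_init.clone().detach().requires_grad_(True)  # latent code to optimize
    optimizer = torch.optim.Adam([w], lr=lr)

    with torch.no_grad():
        a = F.normalize(audio_encoder(audio), dim=-1)  # sound -> joint embedding

    for _ in range(steps):
        image = generator(w)  # synthesize an image from the current latent
        # (resizing/normalizing the image to CLIP's expected input is elided)
        v = F.normalize(clip_model.encode_image(image), dim=-1)
        sim_loss = 1.0 - (v * a).sum(dim=-1).mean()  # cosine distance to sound
        reg_loss = ((w - w_init) ** 2).mean()        # stay near the source image
        loss = sim_loss + lambda_reg * reg_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return w.detach()
```

The regularization term is one plausible way to keep the optimized latent close to the source latent, so that only sound-relevant attributes of the image change while the scene identity is preserved.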