Free-form text prompts allow users to conveniently describe their intentions during image manipulation. Based on the visual latent space of StyleGAN [21] and the text embedding space of CLIP [34], studies focus on how to map between these two latent spaces for text-driven attribute manipulation. Currently, the latent mapping between these two spaces is empirically designed, which confines each manipulation model to a single fixed text prompt. In this paper, we propose a method named Free-Form CLIP (FFCLIP), aiming to establish an automatic latent mapping so that one manipulation model can handle free-form text prompts. FFCLIP has a cross-modality semantic modulation module consisting of semantic alignment and semantic injection. The semantic alignment performs the automatic latent mapping via linear transformations with a cross-attention mechanism. After alignment, we inject semantics from text prompt embeddings into the StyleGAN latent space. For one type of image (e.g., `human portrait'), one FFCLIP model can be learned to handle free-form text prompts. Meanwhile, we observe that although each training text prompt contains only a single semantic meaning, FFCLIP can leverage text prompts with multiple semantic meanings for image manipulation. In the experiments, we evaluate FFCLIP on three types of images (i.e., `human portraits', `cars', and `churches'). Both visual and numerical results show that FFCLIP effectively produces semantically accurate and visually realistic images. Project page: https://github.com/KumapowerLIU/FFCLIP.
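The alignment-then-injection idea described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual architecture): the StyleGAN latent code attends over CLIP text token embeddings via learned linear projections (cross-attention), and the attended semantics are added back into the latent code. All shapes, weight initializations, and the `scale` parameter are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_alignment(w, t, Wq, Wk, Wv):
    """Cross-attention: latent codes w (queries) attend to text embeddings t
    (keys/values) through learned linear transformations."""
    q = w @ Wq                                               # (n_layers, d)
    k = t @ Wk                                               # (n_tokens, d)
    v = t @ Wv                                               # (n_tokens, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (n_layers, n_tokens)
    return attn @ v                                          # aligned semantics

def semantic_injection(w, delta, scale=0.1):
    """Inject the aligned text semantics into the StyleGAN latent code."""
    return w + scale * delta

rng = np.random.default_rng(0)
d, n_layers, n_tokens = 512, 18, 4          # W+ space of StyleGAN2 has 18 layers
w = rng.standard_normal((n_layers, d))      # latent code in W+ (assumed)
t = rng.standard_normal((n_tokens, d))      # CLIP text token embeddings (assumed)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.01 for _ in range(3))

delta = semantic_alignment(w, t, Wq, Wk, Wv)
w_edit = semantic_injection(w, delta)       # edited latent fed to the generator
```

In this sketch, one set of projection weights serves any text prompt, which is the property that lets a single model handle free-form prompts instead of one model per prompt.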