Text-guided image inpainting (TGII) aims to restore missing regions of a damaged image based on a given text. Existing methods rely on a strong vision encoder and a cross-modal fusion model to integrate cross-modal features. However, these methods allocate most of the computation to visual encoding while spending little on modeling modality interactions. Moreover, they perform cross-modal fusion only on deep features, ignoring fine-grained alignment between text and image. Recently, vision-language pre-trained models (VLPMs), which encapsulate rich cross-modal alignment knowledge, have advanced most multimodal tasks. In this work, we propose a novel model for TGII that improves cross-modal alignment (CMA). The CMA model consists of a VLPM serving as the vision-language encoder, an image generator, and global-local discriminators. To exploit cross-modal alignment knowledge for image restoration, we introduce cross-modal alignment distillation and in-sample distribution distillation. In addition, we employ adversarial training so that the model can effectively fill missing regions with complicated structures. Experiments are conducted on two popular vision-language datasets. Results show that our model achieves state-of-the-art performance compared with other strong competitors.
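To make the alignment-distillation objective concrete, below is a minimal sketch of one plausible formulation: the student encoder's in-batch image-text similarity distribution is matched to that of a frozen VLPM teacher via KL divergence. The function name, the temperature `tau`, and the KL formulation are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def cross_modal_alignment_distill(img_feat_s, txt_feat_s,
                                  img_feat_t, txt_feat_t, tau=0.1):
    """Sketch of cross-modal alignment distillation: transfer the
    frozen VLPM teacher's image-text alignment to the student.
    (Hypothetical formulation; the paper's loss may differ.)"""
    # Normalize so dot products become cosine similarities.
    img_s = F.normalize(img_feat_s, dim=-1)
    txt_s = F.normalize(txt_feat_s, dim=-1)
    img_t = F.normalize(img_feat_t, dim=-1)
    txt_t = F.normalize(txt_feat_t, dim=-1)
    # In-batch image-to-text similarity logits for student and teacher.
    logits_s = img_s @ txt_s.t() / tau
    logits_t = img_t @ txt_t.t() / tau
    # Match the student's alignment distribution to the teacher's.
    return F.kl_div(F.log_softmax(logits_s, dim=-1),
                    F.softmax(logits_t, dim=-1),
                    reduction="batchmean")

# Usage with dummy features (batch of 4, 512-dim embeddings):
s_i, s_t = torch.randn(4, 512), torch.randn(4, 512)
t_i, t_t = torch.randn(4, 512), torch.randn(4, 512)
print(cross_modal_alignment_distill(s_i, s_t, t_i, t_t).item())
```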
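The global-local adversarial component can likewise be sketched as a pair of discriminators, one applied to the full generated image and one to a crop around the filled region. The PatchGAN-style architecture and hinge-style generator loss below are common choices assumed here for illustration; the paper's actual discriminator design is not specified in this abstract.

```python
import torch
import torch.nn as nn

class PatchDisc(nn.Module):
    """Tiny PatchGAN-style discriminator (placeholder architecture)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake scores
        )

    def forward(self, x):
        return self.net(x)

def global_local_adv_loss(d_global, d_local, fake_img, local_crop):
    """Hinge-style generator loss combining a global discriminator on
    the whole image and a local one on the masked-region crop.
    (Assumed objective, not necessarily the paper's exact loss.)"""
    return -(d_global(fake_img).mean() + d_local(local_crop).mean())

# Usage with dummy tensors:
d_g, d_l = PatchDisc(), PatchDisc()
fake = torch.randn(2, 3, 64, 64)       # generator output (dummy)
crop = fake[:, :, 16:48, 16:48]        # crop around the missing region
print(global_local_adv_loss(d_g, d_l, fake, crop).item())
```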