Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer that is not limited to specific domains. Unfortunately, due to the stochastic nature of diffusion models, it is often difficult to maintain the original content of the image during the reverse diffusion process. To address this, here we present a novel diffusion-based unsupervised image translation method using disentangled style and content representation. Specifically, inspired by the splicing Vision Transformer, we extract the intermediate keys of the multi-head self-attention layer from a ViT model and use them as a content preservation loss. Image-guided style transfer is then performed by matching the [CLS] classification tokens of the denoised samples and the target image, whereas an additional CLIP loss is used for text-driven style transfer. To further accelerate the semantic change during the reverse diffusion, we also propose a novel semantic divergence loss and a resampling strategy. Our experimental results show that the proposed method outperforms state-of-the-art baseline models in both text-guided and image-guided translation tasks.
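To make the three loss terms named above concrete, the following is a minimal NumPy sketch of one plausible reading: a content loss on intermediate ViT self-attention keys, an image-guided style loss matching [CLS] tokens, and a semantic divergence term that penalizes similarity to the source. All function names are illustrative assumptions, not the authors' implementation, and the ViT features are assumed to be pre-extracted arrays.

```python
import numpy as np

def content_loss(keys_src, keys_out):
    """Content preservation: L2 distance between the intermediate
    multi-head self-attention keys of the source and translated image.
    (Illustrative form; the paper's exact loss may differ.)"""
    return float(np.mean((keys_src - keys_out) ** 2))

def _cosine(a, b):
    """Cosine similarity between two flat feature vectors."""
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def style_loss(cls_out, cls_target):
    """Image-guided style transfer: pull the [CLS] token of the
    denoised sample toward that of the target image."""
    return 1.0 - _cosine(cls_out, cls_target)

def semantic_divergence_loss(cls_out, cls_src):
    """Semantic divergence (assumed form): penalize similarity between
    the denoised sample's [CLS] token and the source's, so the reverse
    diffusion moves away from the source semantics faster."""
    return _cosine(cls_out, cls_src)
```

In a guided reverse-diffusion step, a weighted sum of these terms would be differentiated with respect to the denoised estimate to steer sampling; the weights and the exact layers used for key extraction are hyperparameters not specified in the abstract.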