This paper develops a unified framework for image-to-image translation based on conditional diffusion models and evaluates this framework on four challenging image-to-image translation tasks, namely colorization, inpainting, uncropping, and JPEG restoration. Our simple implementation of image-to-image diffusion models outperforms strong GAN and regression baselines on all tasks, without task-specific hyper-parameter tuning, architecture customization, auxiliary losses, or other sophisticated new techniques. We uncover the impact of an L2 vs. L1 loss in the denoising diffusion objective on sample diversity, and demonstrate the importance of self-attention in the neural architecture through empirical studies. Importantly, we advocate a unified evaluation protocol based on ImageNet, with human evaluation and sample quality scores (FID, Inception Score, Classification Accuracy of a pre-trained ResNet-50, and Perceptual Distance against original images). We expect this standardized evaluation protocol to play a role in advancing image-to-image translation research. Finally, we show that a generalist, multi-task diffusion model performs as well as or better than task-specific specialist counterparts. See https://diffusion-palette.github.io for an overview of the results.
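To make the denoising diffusion objective and the L2 vs. L1 comparison concrete, here is a minimal PyTorch sketch of one training step of a conditional diffusion loss. It is an illustration under stated assumptions, not the paper's implementation: the function `diffusion_loss`, the `model(x_t, cond, t)` signature, and the linear beta schedule are all hypothetical. In Palette, the conditioning image (e.g., the grayscale or masked input) is provided to the denoiser alongside the noisy target.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0, cond, alphas_cumprod, loss_type="l2"):
    """One step of a conditional denoising diffusion objective (sketch).

    model(x_t, cond, t) is assumed to predict the noise eps added to x0;
    cond is the task input (e.g., a grayscale image for colorization,
    a masked image for inpainting).
    """
    b = x0.shape[0]
    # Sample a random diffusion timestep per example.
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    # Forward process: corrupt x0 with Gaussian noise at level t.
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    eps_hat = model(x_t, cond, t)
    # The L2 vs. L1 choice studied in the paper; the paper reports
    # that L2 yields higher sample diversity than L1.
    if loss_type == "l2":
        return F.mse_loss(eps_hat, eps)
    return F.l1_loss(eps_hat, eps)

# Example noise schedule (a standard linear beta schedule; an
# illustrative assumption, not necessarily the paper's schedule):
# betas = torch.linspace(1e-4, 2e-2, 1000)
# alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
```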