Unpaired image-to-image translation aims to find a mapping between a source domain and a target domain. To alleviate the lack of supervised labels for the source images, cycle-consistency based methods have been proposed to preserve image structure by assuming a reversible relationship between unpaired images. However, this assumption exploits only limited correspondence between image pairs. Recently, contrastive learning (CL) has been used to further exploit image correspondence in unpaired translation through patch-based positive/negative learning. Patch-based contrastive routines obtain positives by self-similarity computation and treat all remaining patches as negatives. This flexible learning paradigm provides auxiliary contextualized information at low cost. Since the negatives are sampled in large numbers, we investigate a natural question: are all negatives necessary for feature contrastive learning? Unlike previous CL approaches that use as many negatives as possible, in this paper we study negatives from an information-theoretic perspective and introduce a new negative Pruning technique for Unpaired image-to-image Translation (PUT) that sparsifies and ranks the patches. The proposed algorithm is efficient and flexible, and enables the model to stably learn the essential information between corresponding patches. By putting quality over quantity, only a few negative patches are required to achieve better results. Finally, we validate the superiority, stability, and versatility of our model through comparative experiments.
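To make the patch-based contrastive objective with negative pruning concrete, the sketch below shows a PatchNCE-style loss in which each query patch is contrasted against its corresponding positive patch, but only the top-k highest-similarity negatives are kept rather than all of them. This is a minimal illustration, not the paper's implementation: the top-k similarity ranking stands in for the unspecified sparsify-and-rank criterion, and the function name `pruned_patch_nce_loss` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def pruned_patch_nce_loss(query, positive, negatives, k=16, tau=0.07):
    """Patch-based contrastive loss with negative pruning (illustrative sketch).

    query:     (N, D) features of patches from the translated image
    positive:  (N, D) features of the corresponding source patches
    negatives: (N, M, D) features of the non-corresponding source patches
    k:         number of negatives kept per query after ranking
    tau:       softmax temperature
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Positive logits: cosine similarity between each query and its match.
    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (N, 1)

    # Negative logits: similarity of each query to all other patches.
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)  # (N, M)

    # Negative pruning (assumed criterion): keep only the k most similar
    # (hardest) negatives per query; the rest are discarded.
    l_neg, _ = l_neg.topk(min(k, l_neg.size(1)), dim=-1)           # (N, k)

    # Cross-entropy with the positive at index 0, as in InfoNCE.
    logits = torch.cat([l_pos, l_neg], dim=-1) / tau               # (N, 1+k)
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```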