The development of unsupervised hashing has been advanced by the recently popular contrastive learning paradigm. However, previous contrastive learning-based works are hampered by (1) insufficient data similarity mining based on global-only image representations, and (2) hash code semantic loss caused by data augmentation. In this paper, we propose a novel method, namely Weighted Contrastive Hashing (WCH), to take a step towards solving these two problems. We introduce a novel mutual attention module to alleviate the information asymmetry in network features caused by the missing image structure during contrastive augmentation. Furthermore, we explore the fine-grained semantic relations between images, i.e., we divide each image into multiple patches and calculate similarities between patches. The aggregated weighted similarities, which reflect deep image relations, are distilled to facilitate hash code learning via a distillation loss, so as to obtain better retrieval performance. Extensive experiments show that the proposed WCH significantly outperforms existing unsupervised hashing methods on three benchmark datasets.
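To make the patch-level similarity idea concrete, the following is a minimal NumPy sketch of one plausible reading of the abstract: patch embeddings of two image views are compared pairwise, aggregated into an image-level similarity via softmax weighting, and the result is distilled into the hash codes with an MSE loss. The function names, the softmax weighting, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aggregated_patch_similarity(pa, pb, temp=0.1):
    """Weighted image-level similarity from patch embeddings.

    pa, pb: (B, P, D) arrays of patch embeddings for two batches of images.
    Returns a (B, B) matrix of aggregated image-to-image similarities.
    The softmax weighting and temperature are assumptions for illustration.
    """
    pa = pa / np.linalg.norm(pa, axis=-1, keepdims=True)
    pb = pb / np.linalg.norm(pb, axis=-1, keepdims=True)
    # sim[i, j, p, q]: cosine similarity of patch p in image i vs. patch q in image j
    sim = np.einsum('ipd,jqd->ijpq', pa, pb)
    B = sim.shape[0]
    flat = sim.reshape(B, B, -1)
    # softmax over patch pairs emphasizes the most similar patches
    w = np.exp(flat / temp)
    w = w / w.sum(axis=-1, keepdims=True)
    return (w * flat).sum(axis=-1)  # convex combination, stays in [-1, 1]

def distillation_loss(ha, hb, target_sim):
    """MSE between hash-code cosine similarities and the target similarities."""
    ha = ha / np.linalg.norm(ha, axis=-1, keepdims=True)
    hb = hb / np.linalg.norm(hb, axis=-1, keepdims=True)
    code_sim = ha @ hb.T  # (B, B) similarities between relaxed hash codes
    return ((code_sim - target_sim) ** 2).mean()
```

In this sketch, minimizing `distillation_loss` pulls the similarity structure of the (relaxed, continuous) hash codes toward the richer patch-level similarity structure, which is the role the abstract ascribes to the distillation loss.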