Deep hashing methods have achieved high retrieval accuracy and efficiency in large-scale image retrieval. How to optimize the discrete hash bits has always been a central issue in deep hashing. A common strategy in these methods is to adopt an activation function, e.g., $\operatorname{sigmoid}(\cdot)$ or $\operatorname{tanh}(\cdot)$, and minimize a quantization loss to approximate discrete values. However, this paradigm may drive more and more hash bits into the wrong saturated area of the activation function, from which they can never escape. We call this problem the "Dead Bits Problem~(DBP)". Moreover, the commonly used quantization loss aggravates DBP. In this paper, we propose a simple but effective gradient amplifier, which acts before the activation function, to alleviate DBP. We further devise an error-aware quantization loss that avoids the negative effect of the quantization loss according to the similarity between two images. The proposed gradient amplifier and error-aware quantization loss are compatible with a variety of deep hashing methods. Experimental results on three datasets demonstrate the effectiveness of the proposed gradient amplifier and error-aware quantization loss.
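To make the gradient-amplifier idea concrete, below is a minimal PyTorch-style sketch. It is an assumption-based illustration, not the paper's exact formulation: it treats the amplifier as an identity layer placed before $\operatorname{tanh}(\cdot)$ that rescales the backward gradient of pre-activations whose outputs are saturated, so that stuck bits still receive a usable learning signal. The saturation test, the amplification factor, and the class name `GradientAmplifier` are all hypothetical choices.

```python
import torch


class GradientAmplifier(torch.autograd.Function):
    """Identity in the forward pass; rescales gradients in the backward pass.

    Hypothetical sketch: the saturation criterion and the amplification
    factor are assumptions, not the paper's exact formulation.
    """

    @staticmethod
    def forward(ctx, pre_activation, factor, threshold):
        ctx.save_for_backward(pre_activation)
        ctx.factor = factor
        ctx.threshold = threshold
        return pre_activation.clone()

    @staticmethod
    def backward(ctx, grad_output):
        (pre_activation,) = ctx.saved_tensors
        # Bits whose tanh output lies deep in a saturated region receive a
        # vanishing gradient through the activation; amplify the gradient
        # of their pre-activations so they can still escape that region.
        saturated = torch.tanh(pre_activation).abs() > ctx.threshold
        scale = torch.where(saturated,
                            torch.full_like(grad_output, ctx.factor),
                            torch.ones_like(grad_output))
        # No gradients for the scalar hyperparameters factor and threshold.
        return grad_output * scale, None, None


# Hypothetical usage inside a deep hashing head:
#   z = hash_fc(features)                        # pre-activation hash logits
#   z = GradientAmplifier.apply(z, 5.0, 0.99)    # acts before the activation
#   codes = torch.tanh(z)                        # relaxed binary codes
```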