Cross-modal hashing, favored for its effectiveness and efficiency, has received wide attention for facilitating efficient retrieval across different modalities. Nevertheless, most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes, and often involve a time-consuming training procedure for handling large-scale datasets. To tackle these issues, we formulate the learning of similarity-preserving hash codes in terms of orthogonally rotating the semantic data so as to minimize the quantization loss of mapping such data to the Hamming space, and propose an efficient Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval. More specifically, FDDH introduces an orthogonal basis to regress the targeted hash codes of training examples to their corresponding semantic labels, and utilizes the ε-dragging technique to provide provably large semantic margins. Accordingly, the discriminative power of semantic information can be explicitly captured and maximized. Moreover, an orthogonal transformation scheme is further proposed to map the nonlinear embedding data into the semantic subspace, which can well guarantee the semantic consistency between the data feature and its semantic representation. Consequently, an efficient closed-form solution is derived for discriminative hash code learning. In addition, an effective and stable online learning strategy is presented for optimizing modality-specific projection functions, featuring adaptivity to different training sizes and streaming data. The proposed FDDH approach theoretically approximates bi-Lipschitz continuity, runs sufficiently fast, and significantly improves the retrieval performance over state-of-the-art methods. The source code is released at: https://github.com/starxliu/FDDH.
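As a rough illustration of two of the ideas summarized above, the NumPy sketch below shows how an ε-dragging relaxation can enlarge the margins between regression targets, and how an orthogonal rotation can be alternately optimized to minimize the quantization loss of binarizing an embedding. This is a hedged toy reconstruction from the abstract alone: the function names, the variable names (Y, V, R, M), and the ITQ-style alternating scheme are assumptions, not the authors' FDDH implementation (which is available at the repository linked above).

import numpy as np

def epsilon_dragging_targets(Y, M):
    # Relax a 0/1 label matrix Y into large-margin regression targets:
    # positive entries are dragged upward and negative entries downward
    # by a nonnegative slack M (learned jointly in the paper; here it is
    # simply given), so targets of different classes move farther apart.
    D = np.where(Y > 0, 1.0, -1.0)   # dragging directions
    return Y + D * M                 # enlarged-margin targets

def orthogonal_rotation(V, n_iter=50):
    # Alternately solve  min_{B,R} ||B - V R||_F^2  s.t. R^T R = I,
    # B in {-1, +1}: fix R and take the sign for B, then fix B and solve
    # the orthogonal Procrustes problem for R via an SVD.
    n, c = V.shape
    R = np.linalg.qr(np.random.randn(c, c))[0]   # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                       # binary codes, R fixed
        U, _, Vt = np.linalg.svd(B.T @ V)        # Procrustes step, B fixed
        R = (U @ Vt).T
    return np.sign(V @ R), R

# Toy usage: 6 samples, 4 classes, 4-bit codes.
rng = np.random.default_rng(0)
Y = np.eye(4)[rng.integers(0, 4, size=6)]        # one-hot labels
M = rng.uniform(0.0, 0.5, size=Y.shape)          # illustrative slack values
T = epsilon_dragging_targets(Y, M)
B, R = orthogonal_rotation(T)
print(B)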

