Learning compact representations is vital and challenging for large-scale multimedia data. Cross-view/cross-modal hashing for effective binary representation learning has received significant attention with the exponentially growing availability of multimedia content. Most existing cross-view hashing algorithms emphasize the similarities within individual views, which are then connected via cross-view similarities. In this work, we focus on exploiting the discriminative information from different views, and propose an end-to-end method to learn semantics-preserving and discriminative binary representations, dubbed Discriminative Cross-View Hashing (DCVH), with a view to learning multitasking binary representations for various tasks, including cross-view retrieval, image-to-image retrieval, and image annotation/tagging. The proposed DCVH has two key components. First, it uses convolutional neural network (CNN)-based nonlinear hashing functions and multilabel classification for both images and texts simultaneously. These hashing functions achieve effective continuous relaxation during training, without an explicit quantization loss, by using Direct Binary Embedding (DBE) layers. Second, we propose an effective view alignment via Hamming distance minimization, which is efficiently accomplished by a bit-wise XOR operation. Extensive experiments on two image-text benchmark datasets demonstrate that DCVH outperforms state-of-the-art cross-view hashing algorithms as well as single-view image hashing algorithms. In addition, DCVH provides competitive performance for image annotation/tagging.
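As a minimal illustration (not the authors' implementation), the Hamming distance between two binary codes — the quantity DCVH minimizes for view alignment — can be computed with a bit-wise XOR followed by a population count. The function name and the toy 8-bit codes below are hypothetical:

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two equal-length binary (0/1) codes.

    XOR marks the bit positions where the codes disagree;
    counting the nonzero results gives the Hamming distance.
    """
    return int(np.count_nonzero(np.bitwise_xor(a, b)))

# Toy example: 8-bit codes for an image view and a text view of one item
img_code = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
txt_code = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=np.uint8)
print(hamming_distance(img_code, txt_code))  # -> 2
```

During training, driving this distance toward zero for matched image-text pairs pulls the two views onto the same binary code; at retrieval time the same XOR-and-count operation makes nearest-neighbor search over binary codes very cheap.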


