Abundant real-world data can be naturally represented as large-scale networks, which demand efficient and effective learning algorithms. At the same time, labels may only be available for some networks, so these algorithms must also be able to adapt to unlabeled networks. Domain-adaptive hash learning has enjoyed considerable success on many practical tasks in the computer vision community, owing to its low retrieval time and small storage footprint. However, it has not yet been applied to multi-domain networks. In this work, we bridge this gap by developing an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH. Specifically, we develop four task-specific yet correlated components: (1) network structure preservation via a hard groupwise contrastive loss, (2) relaxation-free supervised hashing, (3) cross-domain intersected discriminators, and (4) semantic center alignment. We conduct extensive experiments to evaluate the effectiveness and efficiency of our method on a range of tasks, including link prediction, node classification, and neighbor recommendation. The results demonstrate that our model outperforms state-of-the-art conventional discrete embedding methods on all of these tasks.