Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure good generalization performance? In many high-impact domains (e.g., brain networks and molecular graphs), collecting and annotating data is prohibitively expensive and time-consuming, which makes domain adaptation an attractive option for alleviating the label scarcity issue. In light of this, state-of-the-art methods focus on deriving domain-invariant graph representations that minimize the domain discrepancy. However, it has recently been shown that a small domain discrepancy loss does not always guarantee good generalization performance, especially in the presence of disparate graph structures and label distribution shifts. In this paper, we present TRANSNET, a generic learning framework for augmenting knowledge transfer across graphs. In particular, we introduce a novel notion, the trinity signal, which can naturally formulate various graph signals at different granularities (e.g., node attributes, edges, and subgraphs). Building on this, we further propose a domain unification module together with a trinity-signal mixup scheme to jointly minimize the domain discrepancy and augment knowledge transfer across graphs. Finally, comprehensive empirical results show that TRANSNET outperforms all existing approaches on seven benchmark datasets by a significant margin.
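To make the cross-graph mixup idea concrete, the following is a minimal sketch of vanilla mixup (convex interpolation of inputs and soft labels with a Beta-distributed coefficient) applied between source- and target-graph signal representations. It is an illustrative assumption, not the exact TRANSNET trinity-signal mixup: the function name cross_graph_mixup and its arguments are hypothetical, and it assumes both domains have already been mapped into a shared feature space (e.g., by a domain unification module) with one-hot labels over the same class set.

```python
import numpy as np

def cross_graph_mixup(src_feats, src_labels, tgt_feats, tgt_labels,
                      alpha=0.2, rng=None):
    """Generic mixup across two graphs' signal representations (a sketch,
    not the paper's trinity-signal formulation).

    src_feats/tgt_feats: (n, d) arrays of unified signal embeddings.
    src_labels/tgt_labels: (n, c) one-hot (or soft) label arrays.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # interpolation coefficient in [0, 1]
    # Pair each source signal with a randomly drawn target signal.
    idx = rng.integers(0, len(tgt_feats), size=len(src_feats))
    mixed_x = lam * src_feats + (1.0 - lam) * tgt_feats[idx]    # mixed signals
    mixed_y = lam * src_labels + (1.0 - lam) * tgt_labels[idx]  # soft labels
    return mixed_x, mixed_y

# Toy usage: 8 source and 8 target signals, 16-dim embeddings, 3 classes.
rng = np.random.default_rng(0)
xs, xt = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
ys = np.eye(3)[rng.integers(0, 3, size=8)]
yt = np.eye(3)[rng.integers(0, 3, size=8)]
mx, my = cross_graph_mixup(xs, ys, xt, yt, rng=rng)
```

The mixed pairs (mx, my) would then be used as additional training examples alongside the labeled source data, which is the usual way mixup-style augmentation is combined with a domain-discrepancy objective.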