Graphon models provide a flexible nonparametric framework for estimating latent connectivity probabilities in networks, enabling a range of downstream applications such as link prediction and data augmentation. However, accurate graphon estimation typically requires a large graph, whereas in practice one often observes only a small network. One approach to this issue is transfer learning, which aims to improve estimation on a small target graph by leveraging structural information from a larger, related source graph. In this paper, we propose GTRANS, a transfer learning framework that integrates neighborhood smoothing and Gromov-Wasserstein optimal transport to align and transfer structural patterns between graphs. To prevent negative transfer, GTRANS includes an adaptive debiasing mechanism that identifies and corrects target-specific deviations via residual smoothing. We provide theoretical guarantees on the stability of the estimated alignment matrix and demonstrate, through extensive synthetic and real-data experiments, that GTRANS improves the accuracy of target graph estimation. These improvements translate directly into enhanced performance on downstream applications such as graph classification and link prediction.
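To make the first ingredient of the pipeline concrete, the following is a minimal sketch of a standard neighborhood-smoothing graphon estimator (in the spirit of Zhang, Levina, and Zhu's estimator); it is not necessarily the exact variant GTRANS uses, and the bandwidth choice and stochastic block model used for illustration are assumptions. Each node's edge-probability row is estimated by averaging the adjacency rows of its most structurally similar nodes.

```python
import numpy as np

def neighborhood_smoothing(A):
    """Estimate edge probabilities P from a symmetric 0/1 adjacency matrix A
    by averaging over neighborhoods of structurally similar nodes (a sketch)."""
    n = A.shape[0]
    h = np.sqrt(np.log(n) / n)            # assumed quantile bandwidth
    S = A @ A / n                         # S[i, k] = <A_i, A_k> / n
    # dissimilarity d2[i, j] = max_k |S[i, k] - S[j, k]|, excluding k in {i, j}
    d2 = np.abs(S[:, None, :] - S[None, :, :])
    idx = np.arange(n)
    d2[idx, :, idx] = -np.inf             # drop k == i from the max
    d2[:, idx, idx] = -np.inf             # drop k == j from the max
    D = d2.max(axis=2)
    P = np.zeros((n, n))
    for i in range(n):
        q = np.quantile(D[i], h)          # h-quantile defines the neighborhood
        N = D[i] <= q                     # nodes structurally similar to i
        P[i] = A[N].mean(axis=0)          # average their adjacency rows
    return (P + P.T) / 2                  # symmetrize the estimate

# usage: recover edge probabilities of a two-block stochastic block model
rng = np.random.default_rng(0)
n, p_in, p_out = 80, 0.8, 0.2
z = np.repeat([0, 1], n // 2)             # block labels
P_true = np.where(z[:, None] == z[None, :], p_in, p_out)
A = (rng.random((n, n)) < P_true).astype(float)
A = np.triu(A, 1)
A = A + A.T                               # symmetric, no self-loops
P_hat = neighborhood_smoothing(A)
```

On a clean block model like this, the smoothed estimate `P_hat` is typically much closer to `P_true` than the raw adjacency matrix is, which is the property transfer then builds on: the Gromov-Wasserstein step aligns such smoothed structural summaries across the source and target graphs.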