In this paper, we introduce graph fusion encoder embedding, a novel method for embedding multiple graphs that share a common vertex set. Under the supervised learning setting, we show that the resulting embedding exhibits a surprising yet highly desirable "synergistic effect": for a sufficiently large number of vertices, vertex classification accuracy always benefits from additional graphs. We prove this effect mathematically under the stochastic block model and identify the necessary and sufficient condition for asymptotically perfect classification. Simulations and real-data experiments confirm the superiority of the proposed method, which consistently outperforms recent benchmark methods in classification.
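To make the idea concrete, here is a minimal sketch of a supervised encoder embedding fused across graphs. It assumes each graph's embedding is the adjacency matrix multiplied by a class-size-normalized one-hot label matrix, and that fusion means concatenating the per-graph embeddings column-wise; the function names and this exact construction are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def encoder_embedding(A, y, K):
    """Sketch of a supervised graph encoder embedding (assumed form).

    A: (n, n) adjacency matrix; y: (n,) integer labels in {0, ..., K-1}.
    Returns an (n, K) embedding: each column averages connectivity
    to one class.
    """
    n = len(y)
    W = np.zeros((n, K))
    for k in range(K):
        idx = (y == k)
        W[idx, k] = 1.0 / idx.sum()  # normalize by class size
    return A @ W

def fusion_embedding(graphs, y, K):
    """Fuse M graphs on a common vertex set by concatenating their
    per-graph encoder embeddings into an (n, K*M) matrix."""
    return np.hstack([encoder_embedding(A, y, K) for A in graphs])
```

The fused embedding can then be fed to any standard classifier (e.g., linear discriminant analysis or k-nearest neighbors); each additional graph contributes K extra columns of class-connectivity information.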