Graph Neural Networks (GNNs) have greatly advanced the task of semi-supervised node classification on graphs. The majority of existing GNNs are trained end to end, which can be viewed as solving a bi-level optimization problem; this process is often inefficient in both computation and memory usage. In this work, we propose a new optimization framework for semi-supervised learning on graphs. The proposed framework can be conveniently solved with alternating optimization algorithms, resulting in significantly improved efficiency. Extensive experiments demonstrate that the proposed method achieves performance comparable to or better than state-of-the-art baselines, while offering significantly better computational and memory efficiency.
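To make the bi-level view concrete, a minimal sketch is given below; the notation ($\theta$ for GNN parameters, $\mathcal{V}_L$ for the labeled nodes with labels $y_v$, $\hat{A}$ for the normalized adjacency matrix, $X$ for node features, $g_\theta$ for a feature transformation, $\ell$ for the classification loss) is illustrative and not taken from the paper:
\[
\min_{\theta}\; \sum_{v \in \mathcal{V}_L} \ell\bigl( H^{*}(\theta)_{v},\, y_v \bigr)
\qquad \text{s.t.} \qquad
H^{*}(\theta) \;=\; \arg\min_{H}\; \| H - g_{\theta}(X) \|_F^2 \;+\; \lambda\, \operatorname{tr}\bigl( H^{\top} (I - \hat{A}) H \bigr).
\]
Here the outer problem fits the labels on $\mathcal{V}_L$, while the inner problem smooths representations over the graph. End-to-end training differentiates through the inner solution, whereas an alternating scheme updates $H$ and $\theta$ in turn, which is one way such a framework can avoid the cost of full end-to-end backpropagation.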