While Graph Neural Networks (GNNs) are popular in the deep learning community, they suffer from several challenges, including over-smoothing, over-squashing, and vanishing gradients. Recently, a series of models have attempted to alleviate these issues by first augmenting the node features and then applying node-wise functions based on Multi-Layer Perceptrons (MLPs); these are widely referred to as GA-MLP models. However, while GA-MLP models benefit from deeper architectures for better accuracy, their efficiency deteriorates significantly. Moreover, popular acceleration techniques such as stochastic training or data parallelism cannot be applied effectively because of the dependency among samples (i.e., nodes) in graphs. To address these issues, in this paper, instead of data parallelism, we propose a parallel graph deep learning Alternating Direction Method of Multipliers (pdADMM-G) framework to achieve model parallelism: the parameters in each layer of a GA-MLP model can be updated in parallel. The extended pdADMM-G-Q algorithm further reduces communication costs by introducing a quantization technique. Theoretical convergence of the pdADMM-G and pdADMM-G-Q algorithms to a (quantized) stationary point is established with a sublinear convergence rate of $o(1/k)$, where $k$ is the number of iterations. Extensive experiments demonstrate the convergence of both proposed algorithms. Moreover, they achieve a greater speedup and better performance than all state-of-the-art comparison methods on nine benchmark datasets. Last but not least, the proposed pdADMM-G-Q algorithm reduces communication overheads by up to $45\%$ without loss of performance. Our code is available at \url{https://github.com/xianggebenben/pdADMM-G}.
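To make the GA-MLP pattern concrete, here is a minimal, hedged sketch (illustrative names and sizes only, not the paper's exact model): node features are augmented with powers of a normalized adjacency matrix, after which a shared node-wise MLP is applied. Because the MLP involves no further graph coupling, its layers are the natural units that a model-parallel scheme such as pdADMM-G can update independently.

```python
import numpy as np

def ga_mlp_forward(A, X, W1, W2, hops=2):
    """Illustrative GA-MLP forward pass.

    A: (n, n) adjacency matrix; X: (n, d) node features.
    W1, W2: weights of a two-layer node-wise MLP (shapes are assumptions).
    """
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(deg, deg))

    # Feature augmentation: concatenate [X, AX, A^2 X, ...]
    feats, cur = [X], X
    for _ in range(hops):
        cur = A_norm @ cur
        feats.append(cur)
    Z = np.concatenate(feats, axis=1)  # (n, d * (hops + 1))

    # Node-wise MLP: each node is transformed independently, so
    # there is no cross-sample dependency inside these layers.
    H = np.maximum(Z @ W1, 0.0)        # ReLU hidden layer
    return H @ W2

rng = np.random.default_rng(0)
n, d, h, c = 5, 4, 8, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                            # undirected graph, no self-loops
X = rng.standard_normal((n, d))
W1 = rng.standard_normal((d * (2 + 1), h))
W2 = rng.standard_normal((h, c))
out = ga_mlp_forward(A, X, W1, W2, hops=2)
print(out.shape)  # (5, 3)
```

The key point the sketch illustrates is that once the augmented features $[X, AX, A^2X, \dots]$ are precomputed, the remaining computation is purely per-node, which is what enables layer-wise parallel parameter updates.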