Graph Neural Networks (GNNs) are powerful tools for graph representation learning. Despite their rapid development, GNNs also face several challenges, such as over-fitting, over-smoothing, and non-robustness. Previous works indicate that these problems can be alleviated by random dropping methods, which introduce noise into models by randomly masking parts of the input. However, some open problems of random dropping on GNNs remain to be solved. First, it is challenging to find a universal method that is suitable for all cases, considering the divergence of different datasets and models. Second, random noise introduced into GNNs causes incomplete coverage of model parameters and an unstable training process. In this paper, we propose a novel random dropping method called DropMessage, which performs dropping operations directly on the message matrix and can be applied to any message-passing GNN. Furthermore, we elaborate on the superiority of DropMessage: it stabilizes the training process by reducing sample variance, and it keeps information diversity from the perspective of information theory, which makes it a theoretical upper bound of other random dropping methods. In addition, we unify existing random dropping methods into our framework and analyze their effects on GNNs. To evaluate our proposed method, we conduct experiments on multiple tasks over five public datasets and two industrial datasets with various backbone models. The experimental results show that DropMessage enjoys both the advantages of effectiveness and generalization.
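To make the dropping operation concrete, below is a minimal sketch of element-wise dropping on the message matrix, written in PyTorch. The function name `drop_message`, the tensor shape convention, and the rescaling by 1/(1 - p) are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
import torch

def drop_message(messages: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
    """Element-wise random dropping on the message matrix.

    `messages` is assumed to have shape (num_edges, feature_dim), i.e. one
    row per directed edge produced by the message-passing step.
    """
    if not training or p == 0.0:
        return messages
    # Keep each entry independently with probability 1 - p, then rescale
    # the survivors so the expected value of the matrix is unchanged.
    mask = torch.bernoulli(torch.full_like(messages, 1.0 - p))
    return messages * mask / (1.0 - p)
```

Under this view, existing random dropping methods can be read as coarser masks on the same matrix: masking a whole row corresponds to removing an edge (as in DropEdge), while masking all rows originating from one node corresponds to DropNode. Element-wise dropping is the finest granularity, which is consistent with the information-diversity argument above.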