Graph neural networks (GNNs) exhibit pathologies, including over-smoothing and limited discriminative power, that stem from insufficiently expressive aggregation mechanisms. We present a unifying framework for stochastic aggregation (STAG) in GNNs, in which noise is (adaptively) injected into the neighborhood aggregation step that forms node embeddings. We give theoretical arguments that STAG models remedy both of these problems with little overhead. Beyond fixed-noise models, we also propose probabilistic STAG variants and a variational inference framework for learning the noise posterior. We conduct illustrative experiments that directly target over-smoothing and the limitations of multiset aggregation. Finally, STAG improves the general performance of GNNs, as demonstrated by competitive results on common citation and molecular graph benchmarks.
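To make the fixed-noise idea concrete, here is a minimal sketch of stochastic neighborhood aggregation on a toy graph. This is an illustration under our own assumptions, not the paper's exact formulation: the function name `stochastic_aggregate`, the choice of multiplicative Gaussian edge noise, and the scale `sigma` are all hypothetical.

```python
import numpy as np

def stochastic_aggregate(h, adj, sigma=0.1, rng=None):
    """Sum neighbor embeddings with multiplicative Gaussian noise
    injected per edge (a sketch of the fixed-noise STAG idea; the
    noise placement and `sigma` are illustrative assumptions)."""
    rng = np.random.default_rng(rng)
    n, d = h.shape
    out = np.zeros_like(h)
    for i in range(n):
        for j in np.nonzero(adj[i])[0]:
            # Perturb each incoming message; sigma=0 recovers plain sum aggregation.
            noise = 1.0 + sigma * rng.standard_normal(d)
            out[i] += noise * h[j]
    return out

# Toy graph: a 3-node path 0-1-2 with one-hot node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
h = np.eye(3)
print(stochastic_aggregate(h, adj, sigma=0.1, rng=0))
```

With `sigma=0` the function reduces to ordinary sum aggregation (`adj @ h`); increasing `sigma` randomizes each message, which is the kind of perturbation the framework studies.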