We present Graph Neural Diffusion (GRAND), which approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows the principled development of a broad new class of GNNs that address the common plights of graph learning models, such as limited depth, oversmoothing, and bottlenecks. Key to the success of our models is stability with respect to perturbations in the data, which we address for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks.
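To make the continuous-diffusion view concrete, the following is a minimal sketch of the governing equation and its discretisation; the notation (node features \(x(t)\), diffusivity \(G\), attention-weighted adjacency matrix \(A(x)\), step size \(\tau\)) is an illustrative choice following the standard graph diffusion setup, not a quotation from the paper body:

\[
\frac{\partial x(t)}{\partial t} = \operatorname{div}\!\left[ G\big(x(t), t\big)\, \nabla x(t) \right].
\]

An explicit Euler step of size \(\tau\), with each step playing the role of one GNN layer, then reads

\[
x^{(k+1)} = x^{(k)} + \tau \left( A\big(x^{(k)}\big) - I \right) x^{(k)},
\]

while an implicit scheme instead solves \(\big(I - \tau\,(A(x^{(k)}) - I)\big)\, x^{(k+1)} = x^{(k)}\) at each step, trading a linear solve per layer for improved stability at larger step sizes.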