A prominent paradigm for graph neural networks is the message passing framework, in which information is communicated only between neighboring nodes. The challenge for approaches that use this paradigm is to ensure efficient and accurate \textit{long-distance communication} between nodes, as deep convolutional networks are prone to over-smoothing. In this paper, we present a novel method based on time-derivative graph diffusion (TIDE) with a learnable time parameter. Our approach allows the spatial extent of diffusion to be adapted across different tasks and network channels, enabling efficient medium- and long-distance communication. Furthermore, we show that our architecture directly enables local message passing and thus inherits the expressive power of local message passing approaches. On widely used graph benchmarks we achieve comparable performance, and on a synthetic mesh dataset we outperform state-of-the-art methods such as GCN and GRAND by a significant margin.
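The core idea, graph diffusion whose time parameter controls the spatial extent of communication per channel, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a dense combinatorial Laplacian and a dense matrix exponential, and the per-channel times are plain numbers rather than learned parameters; the function names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def graph_laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

def diffuse(features, adj, t_per_channel):
    """Heat-kernel diffusion e^{-tL} applied per feature channel.

    Each channel gets its own diffusion time t (in TIDE, t would be a
    learnable parameter): small t keeps communication local, large t
    spreads information over long graph distances.
    """
    L = graph_laplacian(adj)
    out = np.empty_like(features, dtype=float)
    for c, t in enumerate(t_per_channel):
        out[:, c] = expm(-t * L) @ features[:, c]
    return out

# Toy path graph on 4 nodes; an impulse at node 0 in two channels,
# one with a short diffusion time and one with a long one.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.zeros((4, 2))
x[0] = 1.0
y = diffuse(x, adj, t_per_channel=[0.1, 5.0])
```

With the short time the mass stays concentrated near the source node, while the long time spreads it across the whole path; tuning (or learning) t per channel is what lets one architecture cover both local and long-range communication.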