Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in the underlying graph. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
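To make the equilibrium computation concrete, the block below gives a minimal NumPy sketch, not the authors' implementation. It assumes the equilibrium equation X = φ(W X A + b_Ω(U)) with φ = ReLU, where A is the n × n adjacency matrix, U the input features, and X the m × n state matrix, and it uses a simple ∞-norm rescaling of W as a stand-in for the paper's Perron-Frobenius-based projection (λ_pf(|W|) λ_pf(A) < 1 being sufficient for a unique, iteratively reachable fixed point). All function names and hyperparameters are illustrative.

```python
import numpy as np

def ignn_fixed_point(W, A, bU, tol=1e-6, max_iter=300):
    """Iterate X <- ReLU(W X A + bU) to the equilibrium 'state' matrix X."""
    X = np.zeros_like(bU)
    for _ in range(max_iter):
        X_new = np.maximum(W @ X @ A + bU, 0.0)   # phi = ReLU
        if np.linalg.norm(X_new - X) < tol:       # reached the fixed point
            return X_new
        X = X_new
    return X

def project_W(W, A, kappa=0.95):
    """Rescale W so that ||W||_inf * lambda_pf(A) <= kappa < 1.
    Since lambda_pf(|W|) <= ||W||_inf, this is a conservative sufficient
    condition for well-posedness; a sketch, not the paper's exact projection set."""
    lam_A = np.abs(np.linalg.eigvals(A)).max()
    bound = np.linalg.norm(W, ord=np.inf) * lam_A
    return W if bound <= kappa else W * (kappa / bound)
```

Training by implicit differentiation then amounts to differentiating through the fixed point without unrolling the iteration: the gradient of the loss with respect to the pre-activation is itself the solution of a linear fixed-point equation, whose iteration converges under the same well-posedness condition. A hedged sketch, with G denoting the direct gradient dL/dX coming from the readout:

```python
def ignn_implicit_grad(W, A, bU, X, G, tol=1e-6, max_iter=300):
    """Gradient dL/dW at the equilibrium X via implicit differentiation."""
    Z = W @ X @ A + bU
    D = (Z > 0).astype(Z.dtype)             # phi'(Z) for phi = ReLU
    V = np.zeros_like(G)                    # V = dL/dZ solves its own fixed point
    for _ in range(max_iter):
        V_new = D * (G + W.T @ V @ A.T)     # adjoint fixed-point equation
        if np.linalg.norm(V_new - V) < tol:
            break
        V = V_new
    return V @ (X @ A).T                    # dL/dW (dL/d(bU) is simply V)
```

In this sketch, a projected gradient step is W <- project_W(W - lr * ignn_implicit_grad(...), A), which keeps every iterate inside the well-posed region.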