The popularity of deep learning techniques has renewed interest in neural architectures, inspired by Graph Neural Networks (GNNs), that are able to process complex structures represented as graphs. We focus our attention on the originally proposed GNN model of Scarselli et al. (2009), which encodes the states of the graph nodes by means of an iterative diffusion procedure that propagates information among neighbouring nodes and that, during the learning stage, must be computed at every epoch until the fixed point of a learnable state transition function is reached. We propose a novel approach to learning in GNNs, based on constrained optimization in the Lagrangian framework. Learning both the transition function and the node states is the outcome of a joint process, in which the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding iterative epoch-wise procedures and the network unfolding. Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. This process is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
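To make the saddle-point idea concrete, the following is a minimal sketch, not the authors' implementation: the fixed-point condition x_v = f_w(x_v, x_ne[v]) is imposed as a soft equality constraint with one Lagrange multiplier per state component, and a simultaneous gradient descent (on weights and node states) / ascent (on multipliers) search is run on the resulting Lagrangian. The graph, targets, transition network, readout, and constraint form are all illustrative assumptions.

```python
import torch

# Toy problem (all sizes and data are assumptions for illustration).
N, D, H = 5, 4, 8                               # nodes, state dim, hidden dim
A = (torch.rand(N, N) < 0.4).float()            # random adjacency matrix
A.fill_diagonal_(0)
y = torch.randn(N, 1)                           # toy node-level targets

f = torch.nn.Sequential(torch.nn.Linear(2 * D, H), torch.nn.Tanh(),
                        torch.nn.Linear(H, D))  # learnable transition f_w
g = torch.nn.Linear(D, 1)                       # readout on node states

x = torch.zeros(N, D, requires_grad=True)       # free node state variables
lam = torch.zeros(N, D, requires_grad=True)     # Lagrange multipliers

opt_min = torch.optim.SGD(list(f.parameters()) + list(g.parameters()) + [x],
                          lr=1e-2)              # descent on weights and states
opt_max = torch.optim.SGD([lam], lr=1e-2)       # ascent on the multipliers

for step in range(1000):
    # Mean-aggregate neighbour states, then evaluate the transition function.
    agg = A @ x / A.sum(1, keepdim=True).clamp(min=1)
    residual = x - f(torch.cat([x, agg], dim=1))   # constraint x_v - f_w(...) = 0
    loss = torch.nn.functional.mse_loss(g(x), y)
    lagrangian = loss + (lam * residual).sum()

    # One simultaneous descent/ascent step toward a saddle point.
    opt_min.zero_grad(); opt_max.zero_grad()
    lagrangian.backward()
    opt_min.step()
    lam.grad.neg_()        # flip the gradient sign so SGD ascends in lam
    opt_max.step()
```

In this sketch the state convergence is never run as an inner fixed-point loop: the residual is driven to zero by the multipliers while the weights are trained, which is the constraint-satisfaction mechanism the abstract describes.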