This paper discusses the capacity of graph neural networks to learn the functional form of ordinary differential equations that govern dynamics on complex networks. We propose the necessary elements for such a problem, namely inductive biases, a neural network architecture, and a learning task. Statistical learning theory suggests that the generalisation power of neural networks relies on the training and testing data being independent and identically distributed (i.i.d.). Although this assumption, together with an appropriate neural architecture and learning mechanism, is sufficient for accurate out-of-sample predictions of dynamics such as mass-action kinetics, by studying out-of-distribution generalisation in the case of diffusion dynamics we find that the neural network model: (i) has a generalisation capacity that depends on the first moment of the initial-value data distribution; (ii) learns the non-dissipative nature of the dynamics implicitly; and (iii) has an accuracy resolution limit of order $\mathcal{O}(1/\sqrt{n})$ for a system of size $n$.
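As a point of reference for the setting described above, the following is a minimal, hypothetical sketch (not the paper's implementation) of generating supervised training data from diffusion dynamics $\dot{x} = -Lx$ on a random graph, i.e. the learning task of recovering the functional form of the ODE from observed states and their time derivatives. The graph parameters, initial-value distribution, and Euler integration are illustrative assumptions.

```python
# Hypothetical data-generation sketch: diffusion dynamics dx/dt = -L x on a
# random graph, sampled as (state, time-derivative) pairs that a graph neural
# network could be trained on to recover the ODE's right-hand side.
import numpy as np

rng = np.random.default_rng(0)

n = 100                                    # system size
p = 0.05                                   # edge probability (assumed)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric adjacency, no self-loops
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian

def vector_field(x):
    """Diffusion dynamics: dx_i/dt = sum_j A_ij (x_j - x_i), i.e. dx/dt = -L x."""
    return -L @ x

# Initial conditions; the abstract notes that out-of-distribution generalisation
# depends on the first moment of this distribution.
x0 = rng.normal(loc=0.0, scale=1.0, size=n)

# Forward-Euler rollout producing supervised pairs (x_t, dx_t/dt).
dt, steps = 1e-3, 1000
states, derivs = [], []
x = x0.copy()
for _ in range(steps):
    dx = vector_field(x)
    states.append(x.copy())
    derivs.append(dx)
    x = x + dt * dx

X = np.stack(states)   # training inputs for the neural network
Y = np.stack(derivs)   # regression targets (the ODE's functional form)
```

A model trained on such pairs can then be probed out of distribution, e.g. by shifting the mean of the initial-value distribution, which is the kind of experiment the results (i)-(iii) above refer to.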