Recently, graph neural networks (GNNs) have gained significant attention for simulating dynamical systems, owing to their inductive nature, which enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these models. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems, comparing their performance in terms of rollout error, conservation of quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and the decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than those used in training, thus providing a promising route to simulating large-scale realistic systems.
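For concreteness, the evaluation metrics named above can be written as relative errors between predicted and ground-truth rollouts; the specific normalization below is an illustrative assumption, not necessarily the exact form used in the benchmark:

\[
\mathrm{RE}(t) = \frac{\lVert \hat{\mathbf{x}}(t) - \mathbf{x}(t) \rVert_2}{\lVert \hat{\mathbf{x}}(t) \rVert_2 + \lVert \mathbf{x}(t) \rVert_2},
\qquad
\mathrm{EE}(t) = \frac{\lvert \hat{H}(t) - H(t) \rvert}{\lvert \hat{H}(t) \rvert + \lvert H(t) \rvert},
\]

where \(\hat{\mathbf{x}}(t)\) and \(\hat{H}(t)\) denote the predicted trajectory and total energy (Hamiltonian) at time \(t\), and \(\mathbf{x}(t)\), \(H(t)\) their ground-truth counterparts. A symmetric denominator of this kind keeps both errors bounded in \([0,1]\), which makes rollouts of different systems and sizes comparable.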