Physics-based deep learning frameworks have been shown to model the dynamics of complex physical systems accurately while generalizing across problem inputs. However, time-independent problems pose the challenge that accurate predictions require long-range exchange of information across the computational domain. In the context of graph neural networks (GNNs), this calls for deeper networks, which, in turn, may compromise or slow down the training process. In this work, we present two GNN architectures that overcome this challenge: the Edge Augmented GNN and the Multi-GNN. We show that both networks perform significantly better (by a factor of 1.5 to 2) than baseline methods when applied to time-independent solid mechanics problems. Furthermore, the proposed architectures generalize well to unseen domains, boundary conditions, and materials. Here, the treatment of variable domains is facilitated by a novel coordinate transformation that enables rotation and translation invariance. By broadening the range of problems that neural operators based on graph neural networks can tackle, this paper provides the groundwork for their application to complex scientific and industrial settings.
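The abstract mentions a coordinate transformation that makes the network invariant to rotation and translation of the domain. The paper's specific transformation is not given here; as a minimal illustrative sketch, one standard way to obtain such invariance in a GNN is to replace absolute node coordinates with quantities that are unchanged by rigid motions, such as pairwise distances along graph edges. The function name and setup below are hypothetical, not taken from the paper.

```python
import numpy as np

def invariant_edge_features(coords, edges):
    """Compute edge lengths from node coordinates.

    Pairwise distances are invariant under any rotation and
    translation of the whole point set, so feeding them to a GNN
    (instead of raw coordinates) yields rigid-motion invariance.
    """
    coords = np.asarray(coords, dtype=float)
    src, dst = np.asarray(edges).T
    return np.linalg.norm(coords[dst] - coords[src], axis=1)

# Demo: features are unchanged by a rigid transformation.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = np.array([[0, 1], [0, 2], [1, 2]])

theta = 0.7  # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = coords @ R.T + np.array([3.0, -2.0])  # rotate + translate

f_orig = invariant_edge_features(coords, edges)
f_moved = invariant_edge_features(moved, edges)
print(np.allclose(f_orig, f_moved))  # True: features are invariant
```

Note that this distance-based sketch discards directional information; invariant schemes used in practice often also encode angles or express edge vectors in a local reference frame to retain more geometry.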