In this paper, we propose the augmented physics-informed neural network (APINN), which adopts a soft and trainable domain decomposition and flexible parameter sharing to further improve the extended PINN (XPINN) as well as the vanilla PINN method. In particular, a trainable gate network is employed to mimic the hard, discrete decomposition of XPINN; it can be flexibly fine-tuned to discover a potentially better partition. The output of APINN is a weighted average of several sub-nets, with the weights produced by the gate network. APINN does not require complex interface conditions, and its sub-nets can exploit all training samples rather than only the data falling within their respective subdomains. Lastly, each sub-net shares part of its parameters with the others to capture the similar components of the decomposed functions. Furthermore, following the PINN generalization theory in Hu et al. [2021], we show that APINN can improve generalization through proper gate-network initialization and general domain and function decomposition. Extensive experiments on different types of PDEs demonstrate how APINN improves on both PINN and XPINN. Specifically, we present examples where XPINN performs similarly to or worse than PINN, in which cases APINN significantly improves both, as well as cases where XPINN already outperforms PINN, where APINN still yields a modest further improvement. Furthermore, we visualize the optimized gate networks and their optimization trajectories and connect them with their performance, which helps discover a possibly optimal decomposition. Interestingly, when initialized with different decompositions, the performance of the corresponding APINNs can differ drastically. This, in turn, shows the potential to design an optimal domain decomposition for the differential equation problem under consideration.
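To make the described architecture concrete, the following is a minimal PyTorch sketch of a gated model of this kind: a trainable gate network produces soft, pointwise partition weights, a shared trunk carries the common parameters, and the model output is the gate-weighted average of the sub-net outputs. The layer sizes, module names, and the choice of a softmax gate are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class APINN(nn.Module):
    """Minimal sketch of a gated, parameter-sharing PINN (assumed layout)."""

    def __init__(self, dim_in=2, dim_hidden=32, num_subnets=2):
        super().__init__()
        # Trainable gate network: a soft partition of unity over the
        # sub-nets (softmax weights sum to 1 at every input point).
        self.gate = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, num_subnets), nn.Softmax(dim=-1),
        )
        # Shared trunk: parameters common to all sub-nets, intended to
        # capture components shared across the decomposed functions.
        self.shared = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.Tanh(),
        )
        # Sub-net heads: one per (soft) subdomain.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(dim_hidden, dim_hidden), nn.Tanh(),
                          nn.Linear(dim_hidden, 1))
            for _ in range(num_subnets)
        ])

    def forward(self, x):
        w = self.gate(x)                # (N, K) soft partition weights
        h = self.shared(x)              # (N, H) shared features
        u = torch.cat([head(h) for head in self.heads], dim=-1)  # (N, K)
        # Weighted average of sub-net outputs; every sub-net sees every
        # sample, so no interface conditions between subdomains are needed.
        return (w * u).sum(dim=-1, keepdim=True)  # (N, 1)
```

In this sketch, mimicking XPINN's hard decomposition would amount to pretraining the gate to approximate the indicator functions of a given partition before jointly fine-tuning all parameters against the PINN loss.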