Physics-informed neural networks (PINNs) have been widely applied across fields owing to their effectiveness in solving partial differential equations (PDEs). However, the accuracy and efficiency of PINNs must improve considerably before they are ready for scientific and commercial use. To address this issue, we propose a novel dimension-augmented physics-informed neural network (DaPINN), which simultaneously and significantly improves the accuracy and efficiency of PINNs. In the DaPINN model, we introduce inductive bias into the neural network to enhance its generalizability by adding a special regularization term to the loss function. Furthermore, we manipulate the network's input dimension by inserting additional sample features and incorporating the expanded dimensionality into the loss function. We verify the effectiveness of power series augmentation, Fourier series augmentation and replica augmentation in both forward and inverse problems. In most experiments, the error of DaPINN is 1$\sim$2 orders of magnitude lower than that of the PINN. The results show that DaPINN outperforms the original PINN in both accuracy and efficiency, with a reduced dependence on the number of sample points. We also discuss the complexity of DaPINN and its compatibility with other methods.
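The input-dimension augmentation described above can be illustrated with a minimal sketch: a one-dimensional input $x$ is expanded into several derived features before being fed to the network. The function below shows a power-series variant; the degree, the feature layout, and the function name are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def power_series_augment(x, degree=3):
    """Augment 1-D inputs with powers x, x^2, ..., x^degree.

    A hypothetical sketch of power series input augmentation: the network
    that would normally receive a single coordinate now receives `degree`
    correlated features, increasing the input dimension.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    # Stack [x, x^2, ..., x^degree] column-wise into one feature matrix.
    return np.concatenate([x ** k for k in range(1, degree + 1)], axis=1)

# Three sample points, each expanded from 1 feature to 3 features.
X = power_series_augment([0.0, 0.5, 1.0], degree=3)
print(X.shape)  # → (3, 3)
```

A Fourier-series variant would replace the powers with $\sin(k\pi x)$ and $\cos(k\pi x)$ features in the same column-stacking pattern.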