Physics-Informed Neural Networks (PINNs) are deep learning algorithms that leverage physical laws by including partial differential equations (PDEs), together with a respective set of boundary and initial conditions (BCs/ICs), as penalty terms in their loss function. Since the PDE, BC and IC parts of the loss function can differ significantly in magnitude, due to their underlying physical units or the stochasticity of initialisation, training PINNs may suffer from severe convergence and efficiency problems, causing them to fall short of the desired approximation quality. In this work, we highlight the significant role of correctly weighting the combination of multiple competing loss terms for training PINNs effectively. To that end, we implement and evaluate different methods aiming at balancing the contributions of the individual terms of the PINN loss function and their gradients. After reviewing three existing loss scaling approaches (Learning Rate Annealing, GradNorm and SoftAdapt), we propose a novel self-adaptive loss balancing scheme for PINNs called ReLoBRaLo (Relative Loss Balancing with Random Lookback). Finally, the performance of ReLoBRaLo is compared to these approaches and verified by solving both forward and inverse problems on three benchmark PDEs for PINNs: Burgers' equation, Kirchhoff's plate bending equation and Helmholtz's equation. Our simulation studies show that training with ReLoBRaLo is considerably faster and achieves higher accuracy than training PINNs with the other balancing methods, making it highly effective and improving the computational efficiency of PINN algorithms. ReLoBRaLo's adaptability demonstrates robustness across different PDE problem settings. Beyond the PINN examples studied here, the proposed method can also be applied to the wider class of penalised optimisation problems, including PDE-constrained and Sobolev training.
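To make the idea of balancing competing PINN loss terms concrete, the following is a minimal, self-contained sketch of an adaptive weighting scheme in the spirit of ReLoBRaLo: weights are obtained from a softmax over the relative change of each loss term with respect to a randomly chosen reference step and smoothed with an exponential moving average. The hyperparameter names (alpha, rho, temperature) and the exact combination rule are illustrative assumptions, not the reference implementation from the paper.

```python
import math
import random

def relative_balancing_weights(losses, reference, temperature=0.1, eps=1e-12):
    """Softmax over the ratios L_i(t) / L_i(t'): terms whose loss decreased
    least relative to the reference step receive larger weights."""
    ratios = [l / (temperature * max(r, eps)) for l, r in zip(losses, reference)]
    shift = max(ratios)                         # numerically stabilised softmax
    exps = [math.exp(r - shift) for r in ratios]
    total = sum(exps)
    m = len(losses)                             # scale so weights sum to m
    return [m * e / total for e in exps]

class RelativeLossBalancer:
    """Illustrative sketch of relative loss balancing with random lookback.
    With probability rho the previous step is used as reference, otherwise
    the very first step ("lookback"); weights are then EMA-smoothed."""
    def __init__(self, num_terms, alpha=0.9, rho=0.99, temperature=0.1):
        self.alpha = alpha
        self.rho = rho
        self.temperature = temperature
        self.initial = None
        self.previous = None
        self.weights = [1.0] * num_terms

    def update(self, losses):
        if self.initial is None:
            self.initial = list(losses)
            self.previous = list(losses)
            return self.weights
        reference = self.previous if random.random() < self.rho else self.initial
        new_w = relative_balancing_weights(losses, reference, self.temperature)
        self.weights = [self.alpha * w + (1 - self.alpha) * nw
                        for w, nw in zip(self.weights, new_w)]
        self.previous = list(losses)
        return self.weights

# Usage: combine PDE, BC and IC residual losses into one scalar objective.
balancer = RelativeLossBalancer(num_terms=3)
loss_terms = [0.8, 0.05, 0.02]                  # e.g. [L_PDE, L_BC, L_IC]
weights = balancer.update(loss_terms)
total_loss = sum(w * l for w, l in zip(weights, loss_terms))
```

The key design choice illustrated here is that the balancing acts on relative progress rather than absolute magnitudes, so terms with very different physical units can still be traded off against each other during training.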