Adaptive training methods for physics-informed neural networks (PINNs) require a dedicated construction of the distribution of weights assigned to each training sample. Efficiently finding such an optimal weight distribution is not a simple task, and most existing methods choose the adaptive weights by approximating either the full distribution or the maximum of the residuals. In this paper, we show that the bottleneck in the adaptive choice of samples for training efficiency is the behavior of the tail distribution of the numerical residuals. We therefore propose the Residual-Quantile Adjustment (RQA) method to obtain a better weight for each training sample. After initially setting the weights proportional to the $p$-th power of the residual, our RQA method reassigns all weights above the $q$-quantile ($90\%$, for example) to the median value, so that the weights follow a quantile-adjusted distribution derived from the residuals. This iterative reweighting technique is, moreover, very easy to implement. Experimental results show that the proposed method outperforms several adaptive methods on various partial differential equation (PDE) problems.
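The reweighting rule described above can be sketched in a few lines of NumPy. This is an illustrative implementation under the abstract's description only (the function name `rqa_weights` and the normalization step are assumptions, not from the paper):

```python
import numpy as np

def rqa_weights(residuals, p=2.0, q=0.9):
    """Residual-Quantile Adjustment (illustrative sketch).

    Weights start proportional to the p-th power of the residual
    magnitude; weights above the q-quantile are reassigned to the
    median weight, truncating the heavy tail of the distribution.
    """
    w = np.abs(residuals) ** p          # weights ~ p-th power of residuals
    cutoff = np.quantile(w, q)          # q-quantile of the current weights
    median = np.median(w)               # median weight
    w = np.where(w > cutoff, median, w) # tail weights -> median value
    return w / w.sum()                  # normalize to a distribution
```

In an adaptive PINN training loop, such weights would be recomputed from the PDE residuals every few epochs and used to scale each collocation point's loss term, so that a handful of outlier residuals cannot dominate the gradient.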