The potential of learned models for fundamental scientific research and discovery is drawing increasing attention. Physics-informed neural networks (PINNs), in which the loss function directly embeds the governing equations of scientific phenomena, are among the key techniques at the forefront of recent advances. These models are typically trained using stochastic gradient descent, akin to their standard deep learning counterparts. In this paper, however, we carry out a simple analysis showing that the loss functions arising in PINNs exhibit a high degree of complexity and ruggedness that may not be conducive to gradient descent and its variants. This suggests that neuroevolutionary algorithms may be better suited than gradient descent for training PINNs. Our claim is strongly supported herein by benchmark problems and baseline results demonstrating that the convergence rates achieved by neuroevolution can indeed surpass those of gradient descent for PINN training. Furthermore, implementing neuroevolution with JAX yields orders-of-magnitude speedups relative to standard implementations.
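To make the two ingredients of this abstract concrete, the following is a minimal sketch, not the paper's implementation: a physics-informed loss whose residual embeds a governing equation, trained by a simple evolution-strategies variant (in the spirit of OpenAI-ES) rather than by gradient descent over the parameters, with the population evaluated in parallel via `jax.vmap` under `jax.jit`. The toy PDE u''(x) = -sin(x) on [0, π] with zero boundary values, the network sizes, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout; not the paper's code).
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_params(key, sizes=(1, 32, 32, 1)):
    # Small MLP u_theta(x); layer sizes are arbitrary choices for this sketch.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def pinn_loss(params, xs):
    # Physics-informed loss: PDE residual of u'' + sin(x) = 0 at collocation
    # points, plus boundary terms enforcing u(0) = u(pi) = 0.
    u = lambda x: mlp(params, x)
    u_xx = jax.grad(jax.grad(u))  # exact second derivative via autodiff
    residual = jax.vmap(lambda x: u_xx(x) + jnp.sin(x))(xs)
    return jnp.mean(residual**2) + u(0.0)**2 + u(jnp.pi)**2

def es_train(key, params, xs, pop=64, sigma=0.05, lr=0.02, steps=500):
    # Evolution-strategies update: no gradient of the loss w.r.t. parameters
    # is taken; the fitness of a perturbed population drives the search.
    theta, unravel = ravel_pytree(params)
    fitness = lambda t: pinn_loss(unravel(t), xs)

    @jax.jit
    def step(theta, key):
        eps = jax.random.normal(key, (pop // 2, theta.size))
        eps = jnp.concatenate([eps, -eps])         # antithetic sampling
        losses = jax.vmap(lambda e: fitness(theta + sigma * e))(eps)
        grad_est = (losses @ eps) / (pop * sigma)  # search-gradient estimate
        return theta - lr * grad_est

    for k in jax.random.split(key, steps):
        theta = step(theta, k)
    return unravel(theta)

key = jax.random.PRNGKey(0)
xs = jnp.linspace(0.0, jnp.pi, 64)
params = es_train(key, init_params(key), xs)
print(pinn_loss(params, xs))
```

Because the jitted `step` fuses the evaluation of the entire population into one compiled kernel, this style of implementation illustrates how JAX can deliver the large speedups over standard neuroevolution implementations that the abstract refers to.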