This paper introduces neuroevolution for solving differential equations. The solution is obtained by optimizing a deep neural network whose loss function is defined by the residual terms of the differential equations. Recent studies have focused on learning such physics-informed neural networks through variants of stochastic gradient descent (SGD), yet they face difficulty in obtaining accurate solutions due to optimization challenges. In the context of solving differential equations, we are faced with the problem of finding globally optimal parameters of the network, rather than being concerned with out-of-sample generalization. SGD, which searches along a single gradient direction, is prone to becoming trapped in local optima, so it may not be the best approach here. In contrast, neuroevolution carries out a parallel exploration of diverse solutions with the goal of circumventing local optima; it could potentially find more accurate solutions with better-optimized neural networks. However, neuroevolution can be slow, raising tractability issues in practice. With that in mind, a novel and computationally efficient transfer neuroevolution algorithm is proposed in this paper. Our method is capable of exploiting relevant experiential priors when solving a new problem, with adaptation to protect against the risk of negative transfer. The algorithm is applied to a variety of differential equations to empirically demonstrate that transfer neuroevolution can indeed achieve better accuracy and faster convergence than SGD. The experimental outcomes thus establish transfer neuroevolution as a noteworthy approach for solving differential equations, one that has not been studied before. Our work expands the set of algorithms available for optimizing physics-informed neural networks.
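For concreteness, the residual-based loss referenced above follows the standard physics-informed formulation. As a minimal sketch (the operators \mathcal{N}, \mathcal{B} and the collocation counts N_r, N_b are generic notation, not necessarily this paper's): given a differential equation \mathcal{N}[u](x) = 0 on a domain \Omega with boundary or initial conditions \mathcal{B}[u](x) = 0 on \partial\Omega, the network u_\theta is trained to minimize the mean squared residuals at sampled collocation points \{x_i\} \subset \Omega and \{x_j\} \subset \partial\Omega:

\mathcal{L}(\theta) = \frac{1}{N_r} \sum_{i=1}^{N_r} \left| \mathcal{N}[u_\theta](x_i) \right|^2 + \frac{1}{N_b} \sum_{j=1}^{N_b} \left| \mathcal{B}[u_\theta](x_j) \right|^2

Under this view, SGD follows the single descent direction \nabla_\theta \mathcal{L}, whereas neuroevolution maintains a population of candidate parameter vectors \theta and selects among them using the same loss, which is what enables the parallel exploration of diverse solutions described above.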