The neural network-based approach to solving partial differential equations has attracted considerable attention owing to its simplicity and flexibility in representing the solution. During training, a neural network first learns global features corresponding to low-frequency components, while high-frequency components are approximated at a much slower rate. For a class of equations whose solutions contain a wide range of scales, training can therefore suffer from slow convergence and low accuracy because the network fails to capture the high-frequency components. In this work, we propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution to partial differential equations. The proposed method comprises multiple training levels, in each of which a newly introduced neural network is guided to learn the residual of the previous level's approximation. By the nature of the neural network training process, the higher-level corrections are inclined to capture the high-frequency components. We validate the efficiency and robustness of the proposed hierarchical approach on a suite of linear and nonlinear partial differential equations.
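To make the hierarchical residual-learning idea concrete, the following is a minimal sketch assuming a 1D Poisson model problem and PyTorch; the network sizes, optimizer settings, and helper names (make_net, train_level, pde_residual) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of hierarchical residual learning for -u''(x) = f(x), u(0) = u(1) = 0.
# Each level adds a new network trained to reduce the PDE residual left by the
# frozen sum of all previous levels. Hyperparameters are illustrative only.
import torch

def make_net(width=64):
    # Small fully connected network mapping x -> scalar correction.
    return torch.nn.Sequential(
        torch.nn.Linear(1, width), torch.nn.Tanh(),
        torch.nn.Linear(width, width), torch.nn.Tanh(),
        torch.nn.Linear(width, 1),
    )

def f(x):
    # Manufactured right-hand side with low- and high-frequency content.
    return (torch.pi**2) * torch.sin(torch.pi * x) \
        + (25 * torch.pi**2) * torch.sin(5 * torch.pi * x)

def pde_residual(u_total, x):
    # Residual -u'' - f evaluated with automatic differentiation.
    u = u_total(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -d2u - f(x)

def train_level(prev_levels, steps=2000):
    # Train one new correction network on top of the frozen previous levels.
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.rand(256, 1, requires_grad=True)   # interior collocation points
        xb = torch.tensor([[0.0], [1.0]])             # boundary points

        def u_total(pts):
            prev = sum(p(pts) for p in prev_levels) if prev_levels else 0.0
            return prev + net(pts)

        loss = pde_residual(u_total, x).pow(2).mean() + u_total(xb).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    for p in net.parameters():
        p.requires_grad_(False)  # freeze this level before adding the next
    return net

levels = []
for k in range(3):  # three training levels in the hierarchy
    levels.append(train_level(levels))
```

In this sketch the later levels see only the residual left by the earlier ones, so their training signal is dominated by the components the earlier networks have not yet resolved, which is the mechanism the abstract attributes to the high-level corrections.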