The least squares method with deep neural networks as function parametrization has been applied successfully to solve certain high-dimensional partial differential equations (PDEs); however, its convergence is slow and may not be guaranteed even within a simple class of PDEs. To improve the convergence of the network-based least squares model, we introduce a novel self-paced learning framework, SelectNet, which quantifies the difficulty of training samples, treats samples equally in the early stage of training, and slowly explores more challenging samples, e.g., samples with larger residual errors, mimicking the human cognitive process for more efficient learning. In particular, a selection network and the PDE solution network are trained simultaneously; the selection network adaptively weights the training samples of the solution network, achieving the goal of self-paced learning. Numerical examples indicate that the proposed SelectNet model outperforms existing models in convergence speed and robustness, especially for low-regularity solutions.
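The self-paced weighting idea can be illustrated with a minimal sketch. Here the learned selection network is replaced by a simple exponential (softmax-style) weighting of PDE residual errors, with a hypothetical `pace` parameter standing in for the training-stage schedule: `pace = 0` treats all samples equally (early training), and larger values tilt the weighted least-squares loss toward harder samples. This is an assumption-laden illustration of the mechanism, not the authors' actual model.

```python
import math

def selection_weights(residuals, pace):
    """Illustrative stand-in for the selection network: normalized
    weights over samples. pace=0 yields uniform weights (early stage);
    larger pace emphasizes samples with larger residual errors."""
    scores = [math.exp(pace * r) for r in residuals]
    total = sum(scores)
    return [s / total for s in scores]

def weighted_loss(residuals, weights):
    # Weighted least-squares objective over sampled collocation points.
    return sum(w * r * r for w, r in zip(weights, residuals))

# Hypothetical residual errors at three training samples.
residuals = [0.1, 0.5, 2.0]
w_early = selection_weights(residuals, pace=0.0)  # uniform weighting
w_late = selection_weights(residuals, pace=2.0)   # favors the hard sample
```

With `pace = 0` the loss reduces to the plain mean-square residual; as `pace` grows, gradient effort concentrates on the low-regularity regions where residuals stay large.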