The residual method with deep neural networks as the function parametrization has been applied successfully to solve certain high-dimensional partial differential equations (PDEs); however, its convergence is slow and may not be guaranteed even within a simple class of PDEs. To improve the convergence of the network-based residual model, we introduce a novel self-paced learning framework, SelectNet, which quantifies the difficulty of training samples, chooses simpler samples in the early stage of training, and gradually explores more challenging samples, e.g., samples with larger residual errors, mimicking the human cognitive process for more efficient learning. In particular, a selection network and the PDE solution network are trained simultaneously; the selection network adaptively weights the training samples of the solution network, achieving the goal of self-paced learning. Numerical examples indicate that the proposed SelectNet model outperforms existing models in convergence speed and convergence robustness, especially for low-regularity solutions.
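The self-paced weighting idea described above can be sketched minimally: samples with small residuals (easy samples) receive large weights early in training, and the weighting flattens as training progresses so that harder samples are explored. The exponential score and the scalar `pace` schedule below are illustrative assumptions, not the paper's actual selection-network construction:

```python
import numpy as np

def self_paced_weights(residuals, pace):
    """Hypothetical self-paced sample weighting.

    Small `pace` (early training): easy samples (small residual) dominate.
    Large `pace` (late training): weights flatten toward uniform, so
    challenging samples are gradually explored.
    """
    scores = np.exp(-np.abs(residuals) / pace)  # easy samples -> larger score
    return scores / scores.sum()                # normalize to a distribution

# Residual errors of four training samples of the PDE solution network.
r = np.array([0.1, 0.5, 1.0, 2.0])
w_early = self_paced_weights(r, pace=0.5)   # concentrates on easy samples
w_late = self_paced_weights(r, pace=50.0)   # nearly uniform weighting
```

In the actual SelectNet model, a trainable selection network replaces this fixed formula and is optimized jointly with the solution network.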