The use of deep learning methods for solving PDEs is a rapidly expanding field. In particular, Physics-Informed Neural Networks (PINNs), which sample the physical domain and use a loss function that penalizes violations of the partial differential equation, have shown great potential. Yet, to address the large-scale problems encountered in real applications and to compete with existing numerical methods for PDEs, it is important to design parallel algorithms with good scalability properties. In the vein of traditional domain decomposition methods (DDM), we consider the recently proposed deep-ddm approach. We present an extension of this method that relies on a coarse space correction, similarly to what is done in traditional DDM solvers. Our investigation shows that the coarse correction is able to alleviate the deterioration of the solver's convergence when the number of subdomains is increased, thanks to an instantaneous exchange of information between subdomains at each iteration. Experimental results demonstrate that our approach induces a remarkable acceleration of the original deep-ddm method, at a reduced additional computational cost.
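For reference, a minimal sketch of the standard physics-informed loss the abstract alludes to; the notation ($u_\theta$, $\mathcal{N}$, $\mathcal{B}$, collocation sets) is generic and is not taken from the paper itself:

\[
\mathcal{L}(\theta) \;=\; \frac{1}{N_r}\sum_{i=1}^{N_r} \bigl\lVert \mathcal{N}[u_\theta](x_r^i) - f(x_r^i) \bigr\rVert^2
\;+\; \frac{1}{N_b}\sum_{j=1}^{N_b} \bigl\lVert \mathcal{B}[u_\theta](x_b^j) - g(x_b^j) \bigr\rVert^2 ,
\]

where $u_\theta$ is the network approximation of the solution, $\mathcal{N}$ the PDE operator with source term $f$, $\mathcal{B}$ the boundary operator with data $g$, and the collocation points $x_r^i$, $x_b^j$ sample the interior and the boundary of the domain. In a DDM setting such as deep-ddm, an analogous loss is assembled per subdomain, with additional terms enforcing agreement at subdomain interfaces.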