We present a novel deep learning approach to approximate the solution of large, sparse, symmetric, positive-definite linear systems of equations. These systems arise from many problems in applied science, e.g., in numerical methods for partial differential equations. Algorithms for approximating the solution to these systems are often the bottleneck in problems that require their solution, particularly for modern applications that involve many millions of unknowns. Indeed, numerical linear algebra techniques have been investigated for many decades to alleviate this computational burden. Recently, data-driven techniques have also shown promise for these problems. Motivated by the conjugate gradients algorithm, which iteratively selects search directions that minimize the matrix (energy) norm of the approximation error, we design an approach that utilizes a deep neural network to accelerate convergence via data-driven improvement of the search directions. Our method leverages a carefully chosen convolutional network to approximate the action of the inverse of the linear operator up to an arbitrary constant. We train the network using unsupervised learning with a loss function equal to the $L^2$ difference between an input and the system matrix times the network evaluation, where the unspecified constant in the approximate inverse is accounted for. We demonstrate the efficacy of our approach on spatially discretized Poisson equations with millions of degrees of freedom arising in computational fluid dynamics applications. Unlike state-of-the-art learning approaches, our algorithm is capable of reducing the linear system residual to a given tolerance in a small number of iterations, independent of the problem size. Moreover, our method generalizes effectively to various systems beyond those encountered during training.
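To make the training objective concrete, the following is a minimal sketch of a scale-invariant residual loss of the kind described above: the network output $x = f_\theta(b)$ approximates $A^{-1}b$ only up to a constant, so the arbitrary constant is absorbed by the least-squares optimal scalar $c = \langle b, Ax\rangle / \|Ax\|^2$ before measuring the $L^2$ residual. The names `f_theta`, `apply_A`, and the toy SPD matrix are illustrative assumptions, not the authors' code; the actual loss and architecture may differ in detail.

```python
import torch

def scale_invariant_residual_loss(x, apply_A, b, eps=1e-12):
    """Unsupervised loss: squared l2 residual ||b - c * A x||^2, where
    x = f_theta(b) is the network's approximate inverse action and c is
    the least-squares optimal scalar absorbing the arbitrary constant.
    b, x: (batch, n) tensors; apply_A applies the system matrix row-wise."""
    Ax = apply_A(x)                                                  # (batch, n)
    c = (b * Ax).sum(-1, keepdim=True) / \
        (Ax * Ax).sum(-1, keepdim=True).clamp_min(eps)               # optimal scaling
    return ((b - c * Ax) ** 2).sum(-1).mean()                        # mean squared residual

if __name__ == "__main__":
    # Toy demonstration: a small dense SPD matrix stands in for the discrete
    # Poisson operator, and a linear layer stands in for the convolutional network.
    torch.manual_seed(0)
    n = 64
    M = torch.randn(n, n)
    A = M @ M.T + n * torch.eye(n)                  # symmetric positive-definite test matrix
    f_theta = torch.nn.Linear(n, n, bias=False)     # toy stand-in for the learned approximate inverse
    opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
    for step in range(200):
        b = torch.randn(8, n)                       # random right-hand sides as unsupervised data
        loss = scale_invariant_residual_loss(f_theta(b), lambda x: x @ A, b)
        opt.zero_grad(); loss.backward(); opt.step()
```

In a solver, such a trained network would supply candidate search directions inside a conjugate-gradient-style iteration, with step lengths still chosen by the usual exact line search in the energy norm.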