We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. We also include numerical experiments which demonstrate the effectiveness of the method, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare it with existing algorithms from the literature; our examples include the mapping from coefficient to solution in a divergence form elliptic partial differential equation (PDE) problem, and the solution operator for viscous Burgers' equation.