Accurate approximation of scalar-valued functions from sample points is a key task in computational science. Recently, machine learning with Deep Neural Networks (DNNs) has emerged as a promising tool for scientific computing, with impressive results achieved on problems where the dimension of the data or problem domain is large. This work broadens this perspective by focusing on approximating functions that are Hilbert-valued, i.e., that take values in a separable, but typically infinite-dimensional, Hilbert space. Such functions arise in science and engineering, in particular in problems involving the solution of parametric Partial Differential Equations (PDEs). These problems are challenging for three reasons: 1) pointwise samples are expensive to acquire, 2) the domain of the function is high-dimensional, and 3) the range lies in a Hilbert space. Our contributions are twofold. First, we present a novel result on DNN training for holomorphic functions with so-called hidden anisotropy. This result introduces a DNN training procedure and a full theoretical analysis with explicit guarantees on the error and sample complexity. The error bound is explicit in the three key errors occurring in the approximation procedure: the best approximation error, the measurement error, and the physical discretization error. Our result shows that there exists a procedure (albeit nonstandard) for learning Hilbert-valued functions via DNNs that performs as well as, but no better than, current best-in-class schemes. It therefore provides a benchmark lower bound for how well DNNs can perform on such problems. Second, we examine whether better performance can be achieved in practice through different types of architectures and training. We provide preliminary numerical results illustrating the practical performance of DNNs on parametric PDEs, considering different parameters and modifying the DNN architecture to achieve better, competitive results compared with current best-in-class schemes.