Coarse-scale surrogate models for the numerical homogenization of linear elliptic problems with arbitrarily rough diffusion coefficients rely on the efficient solution of fine-scale sub-problems on local subdomains; the local solutions are then employed to deduce appropriate coarse contributions to the surrogate model. However, in the absence of periodicity and scale separation, the reliability of such models requires the local subdomains to cover the whole domain, which may result in high offline costs, in particular for parameter-dependent and stochastic problems. This paper justifies the use of neural networks for the approximation of coarse-scale surrogate models by analyzing their approximation properties. For a prototypical and representative numerical homogenization technique, the Localized Orthogonal Decomposition method, we show that a single neural network suffices to approximate the coarse contributions of all occurring coefficient-dependent local sub-problems for a non-trivial class of diffusion coefficients up to arbitrary accuracy. We present rigorous upper bounds on the depth and the number of non-zero parameters required for such a network to achieve a given accuracy. Further, we analyze the overall error of the resulting neural network enhanced numerical homogenization surrogate model.