Recently, deep Convolutional Neural Networks (CNNs) have proven successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite-dimensional input $\boldsymbol{\mu}\in\mathbb{R}^{p}$ onto a functional output $u_{\boldsymbol{\mu}}:[0,1]^{d}\to\mathbb{R}$, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds with numerical experiments that illustrate their application.
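To make the setup concrete, the sketch below shows, in plain NumPy, the kind of parameter-to-function map the abstract describes: a vector $\boldsymbol{\mu}\in\mathbb{R}^{p}$ is mapped to samples of $u_{\boldsymbol{\mu}}$ on a uniform grid of $[0,1]$ (so $d=1$) by a dense lifting followed by a transposed-convolution-style upsampling. All layer sizes, weights, and the two-stage architecture are illustrative assumptions, not the model analyzed in the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's architecture): a network mapping a
# parameter vector mu in R^p to a discretized function u_mu : [0,1] -> R
# sampled on N grid points (d = 1 here). Sizes and weights are arbitrary.

rng = np.random.default_rng(0)

p, N_coarse, N = 3, 16, 64            # input dim, coarse grid, fine grid

# Dense layer: lift mu to a coarse spatial representation.
W = rng.standard_normal((N_coarse, p)) * 0.1
b = np.zeros(N_coarse)

# Transposed-convolution-style upsampling: each coarse value spreads
# onto a stride-4 window of the fine grid through a shared kernel.
stride = 4
kernel = rng.standard_normal(8) * 0.1

def model(mu):
    h = np.tanh(W @ mu + b)                       # coarse features
    u = np.zeros(N + len(kernel))                 # fine grid, with margin
    for i, v in enumerate(h):                     # transposed conv, stride 4
        u[i * stride : i * stride + len(kernel)] += v * kernel
    return u[:N]                                  # discretized u_mu

mu = np.array([0.5, -1.0, 2.0])
u_mu = model(mu)
print(u_mu.shape)   # (64,)
```

In this decoder-style picture, the grid resolution $N$, the kernel width, and the stride are exactly the kind of architectural hyperparameters whose role the paper's error estimates make explicit.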