Recent experiments have shown that deep networks can approximate solutions to high-dimensional PDEs, seemingly escaping the curse of dimensionality. However, questions regarding the theoretical basis for such approximations, including the required network size, remain open. In this paper, we investigate the representational power of neural networks for approximating solutions to linear elliptic PDEs with Dirichlet boundary conditions. We prove that when a PDE's coefficients are representable by small neural networks, the parameters required to approximate its solution scale polynomially with the input dimension $d$ and proportionally to the parameter counts of the coefficient networks. To this end, we develop a proof technique that simulates gradient descent (in an appropriate Hilbert space) by growing a neural network architecture whose iterates each participate as sub-networks in their (slightly larger) successors, and converge to the solution of the PDE. We bound the size of the network approximating the solution, showing a polynomial dependence on $d$ and no dependence on the volume of the domain.
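As a minimal illustration of the growing-architecture idea (a sketch of the general flavor, not the paper's actual construction), the following PyTorch snippet simulates gradient descent in function space for the Poisson special case $-\Delta u = f$, omitting boundary conditions and any preconditioning: each step $u_{k+1} = u_k - \eta\,(-\Delta u_k - f)$ is realized by a module that embeds the previous iterate as a sub-network, so the parameter count grows with the number of steps. The helpers `MLP`, `laplacian`, and `Iterate` are hypothetical names introduced for this sketch.

```python
import torch

def laplacian(u, x):
    """Pointwise Laplacian of the scalar network u at inputs x (shape [n, d]),
    computed with nested autograd; illustrative rather than efficient."""
    x.requires_grad_(True)
    grad = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]       # [n, d]
    return sum(torch.autograd.grad(grad[:, i].sum(), x, create_graph=True)[0][:, i]
               for i in range(x.shape[1]))                                # [n]

class MLP(torch.nn.Module):
    """A small network playing the role of the initial iterate u_0."""
    def __init__(self, d, width=16):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d, width), torch.nn.Tanh(), torch.nn.Linear(width, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Iterate(torch.nn.Module):
    """One simulated gradient step u_{k+1} = u_k - eta * (-lap(u_k) - f):
    the previous iterate is stored verbatim as a sub-module, so the
    architecture grows additively with the iteration count."""
    def __init__(self, prev, f, eta):
        super().__init__()
        self.prev, self.f, self.eta = prev, f, eta
    def forward(self, x):
        return self.prev(x) - self.eta * (-laplacian(self.prev, x) - self.f(x))

d = 5
f = lambda x: torch.ones(x.shape[0])    # toy right-hand side
u = MLP(d)                              # u_0
for _ in range(3):                      # each u_{k+1} contains u_k as a sub-network
    u = Iterate(u, f, eta=0.05)
x = torch.randn(8, d)
print(u(x).shape)                       # torch.Size([8])
```

Note that each `Iterate` recomputes its sub-network inside `laplacian`, so this sketch trades efficiency for a literal rendering of the "iterates as sub-networks" structure; the paper's size bounds concern the resulting architecture, not this naive evaluation cost.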