Recent empirical results show that deep networks can approximate solutions to high-dimensional PDEs, seemingly escaping the curse of dimensionality. However, many open questions remain regarding the theoretical basis for such approximations, including the number of parameters required. In this paper, we investigate the representational power of neural networks for approximating solutions to linear elliptic PDEs with Dirichlet boundary conditions. We prove that when a PDE's coefficients are representable by small neural networks, the number of parameters required to approximate its solution scales polynomially with the input dimension $d$ and is proportional to the parameter counts of the coefficient networks. Our proof is based on constructing a neural network that simulates gradient descent in an appropriate Hilbert space, an iteration which converges to the solution of the PDE. Moreover, we bound the size of the neural network needed to represent each iterate in terms of the network representing the previous iterate, yielding a final network whose parameter count depends polynomially on $d$ and is independent of the volume of the domain.
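For concreteness, here is a minimal sketch of the kind of Hilbert-space iteration the construction simulates, under illustrative assumptions not fixed by the abstract itself: a divergence-form elliptic problem $-\nabla \cdot (A(x) \nabla u^*(x)) = f(x)$ on a domain $\Omega$ with $u^* = 0$ on $\partial\Omega$, together with the standard variational energy on $H_0^1(\Omega)$:
$$E(u) \;=\; \tfrac{1}{2}\int_\Omega \nabla u(x)^\top A(x)\, \nabla u(x)\, dx \;-\; \int_\Omega f(x)\, u(x)\, dx, \qquad u_{k+1} \;=\; u_k - \eta\, \nabla E(u_k).$$
Since $\nabla E(u) = -\nabla \cdot (A \nabla u) - f$, the minimizer of $E$ is the weak solution of the PDE, and the iterates $u_k$ converge to it. In the construction, each $u_k$ is itself represented by a neural network, and bounding the size of the network for $u_{k+1}$ in terms of the size of the network for $u_k$ is what keeps the parameter count of the final iterate polynomial in $d$.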