We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks. We show that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality. In addition, we provide an explicit upper bound on the generalization error in terms of the training error, the number of training samples, and the neural network size. The theoretical results are illustrated by numerical experiments.