As demonstrated in many areas of real-world application, neural networks are capable of handling high-dimensional data. In the fields of optimal control and dynamical systems, this capability has been studied and verified in many published results in recent years. Toward the goal of revealing why neural networks can solve some high-dimensional problems, we develop an algebraic framework and an approximation theory for compositional functions and their neural network approximations. The theoretical foundation is developed so that it supports error analysis not only for functions as input-output relations, but also for numerical algorithms. This capability is critical because it enables the analysis of approximation errors for problems whose analytic solutions are not available, such as differential equations and optimal control. We identify a set of key features of compositional functions and characterize the relationship between these features and the complexity of the approximating neural networks. In addition to function approximation, we prove several formulae bounding the error of neural networks that approximate the solutions of differential equations, optimization problems, and optimal control problems.
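To make the notion of a compositional function concrete, here is a minimal sketch of a function on a 100-dimensional input built entirely from low-dimensional constituents. The function names and dimensions are illustrative assumptions, not taken from the paper; the point is only that each node in the composition has a small input dimension, which is the kind of structural feature the theory summarized above relates to network complexity.

```python
import numpy as np

def h(block):
    # Low-dimensional constituent: maps R^10 -> R.
    return np.tanh(block.mean())

def g(z):
    # Outer low-dimensional combiner: maps R^10 -> R.
    return np.sin(z.sum())

def f(x):
    # Compositional function on R^100: g composed with ten copies of h,
    # each copy acting on a disjoint block of 10 coordinates.
    blocks = x.reshape(10, 10)
    return g(np.array([h(b) for b in blocks]))

x = np.zeros(100)
print(f(x))  # each h(block) = tanh(0) = 0, so f = sin(0) = 0.0
```

Although `f` takes 100 inputs, every constituent of the composition is a function of at most 10 variables; approximation theories for compositional functions exploit exactly this kind of structure to bound network size independently of the ambient dimension.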