We propose a scalable framework for learning high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs. When only limited training data are available, a compact parametrization is beneficial in order to ameliorate the ill-posedness of the neural network training problem. By linearly restricting high-dimensional maps to informed reduced bases of the inputs, one can compress them in a constructive way that detects appropriate basis ranks and is equipped with rigorous error estimates. The scalable learning task is then to learn the nonlinear mapping between the compressed reduced bases. Unlike the reduced basis construction, however, neural network constructions are not guaranteed to reduce errors by adding representation power, making it difficult to achieve good practical performance. Inspired by recent approximation theory that connects ResNets to sequential minimizing flows, we present an adaptive ResNet construction algorithm. This algorithm allows for depth-wise enrichment of the neural network approximation, and can achieve good practical performance by first training a shallow network and then adapting. We prove universal approximation of the associated neural network class for $L^2_\nu$ functions on compact sets. Our overall framework provides constructive means to detect appropriate breadth and depth, and the related compact parametrizations of neural networks, significantly reducing the need for architectural hyperparameter tuning. Numerical experiments for parametric PDE problems and a 3D CFD wing design optimization parametric map demonstrate that the proposed methodology can achieve remarkably high accuracy with limited training data, and outperforms the other neural network strategies we compared against.
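To make the depth-wise enrichment idea concrete, the following is a minimal PyTorch sketch under our own assumptions (the block architecture, dimensions, tolerance, and zero-initialization choice are illustrative, not the paper's implementation): train a shallow ResNet on reduced input/output coefficients, then repeatedly append a residual block whose output layer is zero-initialized, so the enriched network starts exactly at the previously trained network's state, and retrain.

```python
# Sketch of adaptive depth-wise ResNet enrichment between reduced bases.
# All names, dimensions, and hyperparameters below are assumptions for
# illustration only.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))
        # Zero-init the last layer so the new block acts as the identity
        # at initialization, preserving the trained shallow network.
        nn.init.zeros_(self.f[-1].weight)
        nn.init.zeros_(self.f[-1].bias)

    def forward(self, x):
        return x + self.f(x)

def train(net, X, Y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(X), Y)
        loss.backward()
        opt.step()
    return loss.item()

# Reduced coefficients: X_r = V_in^T x, Y_r = V_out^T y for informed bases
# V_in, V_out (e.g., from POD); synthetic placeholders used here.
r_in, r_out, n = 10, 10, 100
X_r, Y_r = torch.randn(n, r_in), torch.randn(n, r_out)

# Start shallow: one residual block plus a linear head onto the output basis.
blocks = [ResBlock(r_in)]
head = nn.Linear(r_in, r_out)
err = train(nn.Sequential(*blocks, head), X_r, Y_r)

# Depth-wise enrichment: append a near-identity block and retrain until the
# fit reaches a target tolerance or a maximum depth.
while err > 1e-3 and len(blocks) < 8:
    blocks.append(ResBlock(r_in))
    err = train(nn.Sequential(*blocks, head), X_r, Y_r)
```

In this sketch, zero-initializing each new block's output layer means enrichment never degrades the current fit at initialization, which is one simple way to realize the "train shallow, then adapt" strategy described above.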