In this work, we consider the approximation of parametric maps using the so-called Galerkin POD-NN method. This technique combines the computation of a reduced basis via proper orthogonal decomposition (POD) with artificial neural networks (NNs) to construct fast surrogates of these parametric maps. We provide a fully discrete error analysis of this method that accounts for the different discretization errors, including the number of reduced basis functions used in the approximation of the solution manifold, the truncation in the parameter space, and, most importantly, the number of samples used in the computation of the reduced space, together with the effect of using NNs to approximate the reduced coefficients. Following this error analysis, we provide a priori bounds on the required POD tolerance, the resulting POD ranks, and the NN parameters needed to maintain the order of convergence of quasi-Monte Carlo sampling techniques. The main advantage of Galerkin POD-NN over existing, more traditional techniques is that the online and offline phases of the projection-based reduced basis method are completely decoupled, making run-time evaluation of the constructed surrogate considerably faster without the need for a hyper-reduction technique. We conclude this work by showcasing the applicability of this method in a practical industrial application: sound-soft acoustic scattering by a parametrically defined scatterer in three physical dimensions.