We introduce a method to construct a stochastic surrogate model from the results of dimensionality reduction in forward uncertainty quantification. The underlying hypothesis is that the high-dimensional input, augmented by the output of a computational model, admits a low-dimensional representation. This assumption holds in numerous uncertainty quantification applications involving physics-based computational models. The proposed approach differs from a sequential application of dimensionality reduction followed by surrogate modeling, as we "extract" a surrogate model from the results of dimensionality reduction in the joint input-output space. This feature is particularly desirable when the input space is genuinely high-dimensional. The proposed method also diverges from Probabilistic Learning on Manifolds, as it circumvents the reconstruction mapping from the feature space back to the input-output space. The final product is a stochastic simulator that propagates a deterministic input into a stochastic output, preserving the convenience of a sequential "dimensionality reduction + Gaussian process regression" approach while overcoming some of its limitations. The proposed method is demonstrated on two uncertainty quantification problems characterized by high-dimensional input uncertainties.
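To make the contrast concrete, the following minimal sketch (Python with scikit-learn, using a hypothetical toy model in place of an expensive physics-based simulator; it is an illustration of the two strategies, not the paper's algorithm) shows the difference between (a) the sequential "dimensionality reduction + Gaussian process regression" workflow, which reduces the input alone, and (b) a reduction carried out in the joint input-output space, from which the proposed method would extract a stochastic surrogate.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical toy problem: high-dimensional random input x -> scalar output y.
rng = np.random.default_rng(0)
d = 200                                   # input dimension
n = 300                                   # number of model evaluations
X = rng.standard_normal((n, d))
w = rng.standard_normal(d) / np.sqrt(d)
Y = np.tanh(X @ w) + 0.05 * rng.standard_normal(n)   # stand-in for the computational model

# (a) Sequential "dimensionality reduction + GP regression":
#     reduce the input alone, then regress the output on the reduced coordinates.
pca_in = PCA(n_components=5).fit(X)
Z = pca_in.transform(X)
gp = GaussianProcessRegressor().fit(Z, Y)

# (b) Joint-space reduction: reduce the concatenated input-output samples, so the
#     low-dimensional representation carries information about both input and output.
XY = np.hstack([X, Y[:, None]])
pca_joint = PCA(n_components=5).fit(XY)
H = pca_joint.transform(XY)               # features in the joint input-output space

# Prediction at a new deterministic input with the sequential surrogate:
x_new = rng.standard_normal((1, d))
y_mean, y_std = gp.predict(pca_in.transform(x_new), return_std=True)

# The proposed method would instead condition the joint-space features H on x_new
# and return a distribution of outputs (a stochastic simulator); that conditioning
# step is specific to the paper and is not reproduced in this sketch.
```

The sketch only illustrates where the two workflows diverge: in (a) the output never enters the reduction, whereas in (b) the reduced features already encode the input-output relation that the stochastic surrogate is extracted from.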