Optimization constrained by computational models is common across science and engineering. However, in many cases the high-fidelity model of a system cannot be optimized directly due to its complexity and computational cost. Instead, a low(er)-fidelity model is constructed to enable the intrusive and many-query algorithms needed for large-scale optimization. As a result of the discrepancy between the high- and low-fidelity models, the optimal solution determined using the low-fidelity model is frequently far from true optimality. In this article we introduce a novel approach that uses limited high-fidelity data to calibrate the model discrepancy in a Bayesian framework and propagate it through the optimization problem. The result provides both an improvement in the optimal solution and a characterization of the uncertainty due to the limited accessibility of high-fidelity data. Our formulation exploits structure in the post-optimality sensitivity operator to ensure computational scalability. Numerical results demonstrate how an optimal solution computed using a low-fidelity model can be significantly improved with as little as one evaluation of a high-fidelity model.
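The workflow described above can be illustrated with a minimal toy sketch. Everything here is hypothetical and not taken from the article: the 1-D objectives `f_hi` and `f_lo`, the linear-in-`x` discrepancy model, and the Gaussian prior/noise variances are all illustrative stand-ins. The sketch shows the general shape of the idea: optimize a cheap low-fidelity model, spend a single high-fidelity evaluation to observe the discrepancy (here, its gradient at the low-fidelity optimum), form a Gaussian posterior over a discrepancy parameter, re-optimize with the posterior-mean correction, and propagate posterior samples to quantify uncertainty in the corrected optimum.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 1-D stand-ins (illustrative only, not from the article):
def f_hi(x):  # expensive high-fidelity objective, true optimum at x = 1.2
    return (x - 1.2) ** 2

def f_lo(x):  # cheap low-fidelity surrogate, optimum at x = 0.5
    return (x - 0.5) ** 2

def grad(f, x, h=1e-6):  # central finite-difference gradient
    return (f(x + h) - f(x - h)) / (2 * h)

# Step 1: optimize the low-fidelity model.
x_lo = minimize_scalar(f_lo).x

# Step 2: one high-fidelity evaluation -- observe the gradient of the
# discrepancy d(x) = f_hi(x) - f_lo(x) at the low-fidelity optimum.
g_obs = grad(f_hi, x_lo) - grad(f_lo, x_lo)

# Step 3: Bayesian calibration of a linear discrepancy d(x) ~ theta * x,
# with a zero-mean Gaussian prior on theta and Gaussian observation noise.
prior_var, noise_var = 10.0, 1e-4
post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
post_mean = post_var * g_obs / noise_var

# Step 4: re-optimize with the posterior-mean correction, then propagate
# posterior samples of theta to quantify uncertainty in the optimum.
x_corr = minimize_scalar(lambda x: f_lo(x) + post_mean * x).x
rng = np.random.default_rng(0)
x_samp = [minimize_scalar(lambda x, t=t: f_lo(x) + t * x).x
          for t in rng.normal(post_mean, np.sqrt(post_var), 200)]

print(f"low-fi optimum {x_lo:.3f} -> corrected {x_corr:.3f}, "
      f"posterior spread {np.std(x_samp):.4f}")
```

In this toy problem the corrected optimum lands essentially on the true high-fidelity optimum after a single high-fidelity (gradient) evaluation, because both objectives are quadratic with the same curvature; the spread of `x_samp` plays the role of the uncertainty characterization. The article's formulation works with the post-optimality sensitivity operator rather than naive re-optimization under each sample, which is what makes the approach scale to large problems.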