Optimization constrained by computational models is common across science and engineering. In many cases, however, a high-fidelity numerical model of the system cannot be optimized directly because of its complexity and computational cost. Instead, low-fidelity models are constructed to enable intrusive algorithms for large-scale optimization. Because of the discrepancy between high- and low-fidelity models, optimal solutions computed with low-fidelity models are frequently far from true optimality. In this article we introduce a novel approach that uses post-optimality sensitivities with respect to model discrepancy to update solutions so that they are representative of the true system. Limited high-fidelity data are used to calibrate the model discrepancy in a Bayesian framework, and the calibrated discrepancy is then propagated through the optimization problem. The result is a significant improvement of the optimal solution, together with an uncertainty characterization. Our formulation exploits structure in the post-optimality sensitivity operator to achieve computational scalability. Numerical results demonstrate how an optimal solution computed with a low-fidelity model can be significantly improved with limited evaluations of a high-fidelity model.
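The core idea, updating a low-fidelity optimum via its sensitivity with respect to a calibrated discrepancy parameter, can be illustrated on a toy scalar problem. This is a minimal sketch under assumed forms (a quadratic objective with a hypothetical discrepancy parameter `d`), not the article's formulation:

```python
# Illustrative sketch (hypothetical problem, not the paper's formulation):
# low-fidelity objective J(z, d) = 0.5 * (z - (a + d))**2, where d is a
# model-discrepancy parameter. The low-fidelity optimization uses the
# nominal value d = 0; limited high-fidelity data later calibrate d.

a = 1.0   # low-fidelity model parameter (assumed value)
d0 = 0.0  # nominal discrepancy used by the low-fidelity optimization

def z_star(d):
    """Minimizer of J(., d); available in closed form for this toy problem."""
    return a + d

# Post-optimality sensitivity via the implicit function theorem:
#   dz*/dd = -(d^2J/dz^2)^(-1) * (d^2J/dz dd)
# For this quadratic: d^2J/dz^2 = 1 and d^2J/dz dd = -1, so dz*/dd = 1.
H = 1.0        # Hessian of J in z at the optimum
mixed = -1.0   # mixed second derivative d^2J/(dz dd)
sensitivity = -mixed / H

# Suppose Bayesian calibration from a few high-fidelity evaluations
# yields d ~ 0.3 (assumed value for illustration).
d_cal = 0.3

# First-order post-optimality update of the low-fidelity optimum:
z_updated = z_star(d0) + sensitivity * (d_cal - d0)
print(z_updated)  # for this quadratic the update recovers z_star(d_cal)
```

Because the toy objective is quadratic, the first-order update is exact; in general it is a linearization about the nominal optimum, and propagating the calibrated discrepancy's posterior through the same sensitivity yields an uncertainty estimate on the updated solution.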