A common goal throughout science and engineering is to solve optimization problems constrained by computational models. However, in many cases a high-fidelity numerical simulation of the system cannot be optimized directly because code complexity and computational cost prohibit the use of intrusive, many-query algorithms. Instead, lower-fidelity models are constructed to enable intrusive algorithms for large-scale optimization. As a result of the discrepancy between the high- and low-fidelity models, optimal solutions determined using low-fidelity models are frequently far from the true optimum. In this article we introduce a novel approach that uses post-optimality sensitivities with respect to model discrepancy to update the optimization solution. Limited high-fidelity data is used to calibrate the model discrepancy in a Bayesian framework, which in turn is propagated through post-optimality sensitivities of the low-fidelity optimization problem. Our formulation exploits structure in the post-optimality sensitivity operator to achieve computational scalability. Numerical results demonstrate how an optimal solution computed using a low-fidelity model may be significantly improved with limited evaluations of a high-fidelity model.
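To make the update concrete, here is a minimal first-order sketch in notation of our own choosing (the article's precise formulation may differ): let $z^\star(\theta)$ denote the minimizer of the low-fidelity optimization problem augmented with a model discrepancy parameterized by $\theta$, where $\theta = 0$ corresponds to the uncorrected low-fidelity model. A first-order post-optimality update of the solution is
\[
z^\star(\theta) \approx z^\star(0) + \left.\frac{\partial z^\star}{\partial \theta}\right|_{\theta = 0} \theta ,
\]
where the sensitivity operator $\partial z^\star / \partial \theta$ is obtained, as is standard in post-optimality analysis, by applying the implicit function theorem to the first-order optimality conditions of the low-fidelity problem. Samples from the Bayesian posterior of $\theta$, calibrated with the limited high-fidelity data, can then be pushed through this linear map to produce updated solutions with quantified uncertainty.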