We present a numerical method to efficiently solve optimization problems governed by large-scale nonlinear systems of equations, including discretized partial differential equations, using projection-based reduced-order models accelerated with hyperreduction (empirical quadrature) and embedded in a trust-region framework that guarantees global convergence. The proposed framework constructs a hyperreduced model on-the-fly during the solution of the optimization problem, which completely avoids an offline training phase. This ensures all snapshot information is collected along the optimization trajectory, which avoids wasting samples in remote regions of the parameter space that are never visited, and inherently avoids the curse of dimensionality of sampling in a high-dimensional parameter space. At each iteration of the proposed algorithm, a reduced basis and empirical quadrature weights are constructed precisely so that the global convergence criteria of the trust-region method are satisfied, guaranteeing global convergence to a local minimum of the original (unreduced) problem. Numerical experiments are performed on two fluid shape optimization problems to verify the global convergence of the method and demonstrate its computational efficiency; speedups over 18x (accounting for all computational cost, even cost that is traditionally considered "offline", such as snapshot collection and data compression) relative to standard optimization approaches that do not leverage model reduction are shown.
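To illustrate the trust-region mechanism the abstract relies on, the following is a minimal sketch, not the paper's actual ROM/hyperreduction algorithm: an "expensive" objective is approximated at each iterate by a cheap local model built from samples at the current point (standing in for a reduced-order model built from snapshots), a candidate step is taken within the trust radius, and the radius is adjusted from the ratio of actual to predicted reduction. All function and parameter names here are hypothetical.

```python
import numpy as np

def f(x):
    # Stand-in for an expensive high-fidelity objective (simple quadratic;
    # in the paper's setting this would require a large nonlinear PDE solve).
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def build_local_model(x, h=1e-4):
    # Build a cheap local surrogate from samples near the current iterate,
    # analogous to constructing a reduced model from snapshots on-the-fly.
    # Finite-difference gradient and Hessian of f at x.
    n = x.size
    g, H, fx = np.zeros(n), np.zeros((n, n)), f(x)
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        g[i] = (f(x + ei) - f(x - ei)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + fx) / h**2
    return g, H

def trust_region_opt(x0, delta=1.0, eta=0.1, tol=1e-6, max_iter=200):
    x = x0.copy()
    for _ in range(max_iter):
        g, H = build_local_model(x)       # surrogate rebuilt at each iterate
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy step: minimize the quadratic model along -g within radius.
        gHg = g @ H @ g
        tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (delta * gHg))
        s = -tau * (delta / gnorm) * g
        # Ratio of actual to model-predicted reduction drives acceptance.
        pred = -(g @ s + 0.5 * s @ H @ s)
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > eta:
            x = x + s                     # accept the step
        # Shrink or grow the trust radius based on model quality.
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x
```

In the paper's framework the local model is instead a hyperreduced projection-based ROM, and the basis and quadrature weights are constructed so the standard trust-region accuracy conditions hold, which is what yields the global convergence guarantee.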