Configuration Optimization Problems (COPs), which involve minimizing a loss function over a set of discrete points $\boldsymbol{\gamma} \subset P$, are common in areas such as Model Order Reduction, Active Learning, and Optimal Experimental Design. While exact solutions are often infeasible, heuristic methods such as the Greedy Sampling Method (GSM) provide practical alternatives, particularly in low-dimensional settings. GSM recursively updates $\boldsymbol{\gamma}$ by solving a continuous optimization problem, which is typically approximated by a search over a discrete sample set $S \subset P$. However, as the dimensionality grows, the number of samples required for $S$ grows exponentially, a manifestation of the curse of dimensionality. To address this, we introduce the Polytope Division Method (PDM), a scalable greedy-type approach that adaptively partitions the parameter space and targets regions of high loss. PDM achieves linear scaling with problem dimensionality and offers an efficient solution approach for high-dimensional COPs, overcoming the limitations of traditional methods.
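The greedy update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the loss here is a hypothetical worst-case fill-distance surrogate (the actual loss in a COP is application-specific), and the candidate set `S` stands in for the discrete sample set $S \subset P$.

```python
import numpy as np

def loss(point, selected):
    # Hypothetical loss: distance from a candidate to the current set gamma.
    # The true loss function in a COP depends on the application.
    if not selected:
        return np.inf
    return min(np.linalg.norm(point - q) for q in selected)

def greedy_sampling(S, n_select):
    """GSM-style sketch: at each step, add the candidate from the discrete
    sample set S that currently attains the highest loss."""
    selected = []
    for _ in range(n_select):
        best = max(S, key=lambda p: loss(p, selected))
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
S = list(rng.random((200, 2)))      # discrete sample set S in P = [0, 1]^2
gamma = greedy_sampling(S, 5)       # greedily selected configuration set
```

Because the search is restricted to the finite set `S`, covering a $d$-dimensional parameter space at fixed resolution requires a sample count exponential in $d$, which is the bottleneck PDM is designed to avoid.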