Efficiently solving constrained optimization problems is crucial for numerous real-world applications, yet traditional solvers are often computationally prohibitive for real-time use. Machine learning-based approaches have emerged as a promising alternative, providing approximate solutions at much faster speeds, but they struggle to strictly enforce constraints, leading to infeasible solutions in practice. To address this, we propose the Feasibility-Seeking Neural Network (FSNet), which integrates a feasibility-seeking step directly into its solution procedure to ensure constraint satisfaction. This feasibility-seeking step solves an unconstrained optimization problem that minimizes constraint violations in a differentiable manner, enabling end-to-end training and providing guarantees on feasibility and convergence. Our experiments on a range of optimization problems, spanning smooth and nonsmooth as well as convex and nonconvex cases, demonstrate that FSNet provides feasible solutions whose quality is comparable to (and in some cases better than) that of traditional solvers, at significantly faster speeds.
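The feasibility-seeking idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy constraint set (x_1 + x_2 <= 1, x >= 0), a hand-derived gradient of the quadratic violation penalty, and plain gradient descent starting from a (possibly infeasible) network prediction.

```python
import numpy as np

# Illustrative constraints (not from the paper): g(x) <= 0 with
# g_1(x) = x_1 + x_2 - 1 and g_{2,3}(x) = -x (elementwise nonnegativity).
def violation(x):
    """Quadratic penalty V(x) = sum(max(0, g_i(x))^2) on constraint violations."""
    g = np.concatenate(([x[0] + x[1] - 1.0], -x))
    return np.sum(np.maximum(g, 0.0) ** 2)

def violation_grad(x):
    """Analytic gradient of V for this toy constraint set."""
    grad = np.zeros_like(x)
    s = max(x[0] + x[1] - 1.0, 0.0)       # active part of g_1
    grad += 2.0 * s * np.array([1.0, 1.0])
    grad -= 2.0 * np.maximum(-x, 0.0)     # active parts of the bound constraints
    return grad

def feasibility_seek(x0, lr=0.1, steps=200):
    """Unconstrained minimization of the violation penalty by gradient descent."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * violation_grad(x)
    return x

x0 = np.array([1.2, 0.5])   # hypothetical infeasible network output
x = feasibility_seek(x0)
print(violation(x))         # violation driven to (near) zero
```

In FSNet this correction is part of the solution procedure and is differentiated through during training; the sketch above only shows the inference-time effect of driving the violation penalty toward zero.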