Ensuring solution feasibility is a key challenge in developing Deep Neural Network (DNN) schemes for solving constrained optimization problems, due to inherent DNN prediction errors. In this paper, we propose a ``preventive learning'' framework to systematically guarantee DNN solution feasibility for problems with convex constraints and general objective functions. We first apply a predict-and-reconstruct design that not only guarantees satisfaction of the equality constraints but also exploits them to reduce the number of variables the DNN must predict. Then, as a key methodological contribution, we systematically calibrate the inequality constraints used in DNN training, thereby anticipating prediction errors and ensuring the resulting solutions remain feasible. We characterize the calibration magnitudes and the DNN size sufficient for ensuring universal feasibility. We further propose a new Adversary-Sample Aware training algorithm to improve the DNN's optimality performance without sacrificing the feasibility guarantee. Overall, the framework provides two DNNs: the first, obtained from characterizing the sufficient DNN size, guarantees universal feasibility, while the second, obtained from the proposed training algorithm, further improves optimality while maintaining universal feasibility. We apply the preventive learning framework to develop DeepOPF+ for solving the essential DC optimal power flow problem in grid operation. DeepOPF+ improves over existing DNN-based schemes by ensuring feasibility and attaining consistent, desirable speedups in both light-load and heavy-load regimes. Simulation results over IEEE Case-30/118/300 test cases show that DeepOPF+ generates $100\%$ feasible solutions with $<0.5\%$ optimality loss and up to two orders of magnitude computational speedup, compared to a state-of-the-art iterative solver.
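To make the two core ideas concrete, the following is a minimal PyTorch-style sketch, not the paper's implementation: a predict-and-reconstruct network that satisfies linear equality constraints by construction, and a training penalty on inequality constraints tightened by a calibration margin. All matrices, dimensions, and names ($A_p$, $A_r$, $b$, $G$, $h$, $\delta$) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical setting: decision variable x = [x_p, x_r]. The DNN predicts x_p;
# x_r is reconstructed from the linear equality constraints A_p x_p + A_r x_r = b
# (A_r assumed square and invertible). Inequalities G x <= h are tightened to
# G x <= h - delta during training so that prediction errors up to the
# calibration margin delta still yield feasible solutions.

class PredictAndReconstruct(nn.Module):
    def __init__(self, in_dim, pred_dim, A_p, A_r, b):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, pred_dim), nn.Sigmoid(),   # outputs scaled to [0, 1]
        )
        self.A_p, self.b = A_p, b
        self.A_r_inv = torch.linalg.inv(A_r)          # reconstruction map

    def forward(self, inp, lo, hi):
        # Predicted variables, mapped into their box limits [lo, hi].
        x_p = lo + (hi - lo) * self.net(inp)
        # Remaining variables recovered from the equalities, which thus hold exactly.
        x_r = (self.b - x_p @ self.A_p.T) @ self.A_r_inv.T
        return torch.cat([x_p, x_r], dim=-1)

def calibrated_penalty(x, G, h, delta):
    """Penalty on the inequality constraints tightened by the calibration margin delta."""
    violation = torch.relu(x @ G.T - (h - delta))
    return violation.pow(2).sum(dim=-1).mean()
```

In this sketch the calibration margin plays the role of the calibrated inequality constraints described above: training against the tightened constraints leaves slack that absorbs DNN prediction errors at inference time.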