The Alternating Direction Method of Multipliers (ADMM) is a popular algorithmic framework for separable optimization problems with linear constraints. Because numerical ADMM exploits neither the particular structure of the problem at hand nor the information in the input data, extending ADMM with task-specific modules (e.g., neural networks and other data-driven architectures) is a significant but challenging task. This work focuses on designing a flexible algorithmic framework that incorporates various task-specific modules (with no additional constraints on them) to improve the performance of ADMM in real-world applications. Specifically, we propose Guidance from Optimality (GO), a new customization strategy, to embed task-specific modules into ADMM (GO-ADMM). By introducing an optimality-based criterion to guide the propagation, GO-ADMM establishes an updating scheme that is agnostic to the choice of additional modules. Existing task-specific methods simply plug their modules into the numerical iterations in a straightforward manner; even under restrictive constraints on the plug-in modules, they obtain only relatively weak convergence properties for the resulting ADMM iterations. In contrast, without any restrictions on the embedded modules, we prove the convergence of GO-ADMM with respect to objective values and constraint violations, and derive the worst-case convergence rate measured by iteration complexity. Extensive experiments verify the theoretical results and demonstrate the efficiency of GO-ADMM.
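To make the baseline concrete, the following is a minimal sketch of the classic (numerical) ADMM scheme the abstract refers to, not the proposed GO-ADMM. It runs ADMM on a hypothetical scalar lasso-type problem, min 0.5*(x - a)^2 + lam*|z| subject to x = z; the names `a`, `lam`, and `rho` are illustrative example parameters, not quantities from the paper.

```python
# Illustrative sketch only (not the paper's GO-ADMM): classic scaled-form
# ADMM on a toy scalar problem
#     minimize 0.5*(x - a)^2 + lam*|z|   subject to  x = z.
# The parameters a, lam, rho are hypothetical example values.

def soft_threshold(v, t):
    """Proximal operator of t*|.|; solves the z-subproblem in closed form."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_scalar_lasso(a, lam, rho=1.0, iters=100):
    x = z = u = 0.0  # primal variables x, z and scaled dual variable u
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: proximal step on lam*|.| at the point x + u
        z = soft_threshold(x + u, lam / rho)
        # dual update: accumulate the constraint residual x - z
        u = u + (x - z)
    return x, z

x, z = admm_scalar_lasso(a=2.0, lam=0.5)
print(z)  # converges to soft_threshold(2.0, 0.5) = 1.5
```

The two primal subproblems are solved exactly here; GO-ADMM, by contrast, is designed for settings where such updates are augmented with task-specific modules while convergence guarantees are retained.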