In this paper, we propose a new Hessian-inverse-free Fully Single Loop Algorithm (FSLA) for bilevel optimization problems. Classic algorithms for bilevel optimization admit a double-loop structure, which is computationally expensive. Recently, several single-loop algorithms have been proposed that optimize the inner and outer variables alternately. However, these algorithms do not yet achieve a fully single loop, as they overlook the loop needed to evaluate the hyper-gradient for a given inner and outer state. To develop a fully single-loop algorithm, we first study the structure of the hyper-gradient and identify a general approximation formulation of hyper-gradient computation that encompasses several previous common approaches, e.g., back-propagation through time, conjugate gradient, \emph{etc.} Based on this formulation, we introduce a new state variable to maintain the historical hyper-gradient information. Combining this new formulation with the alternating update of the inner and outer variables, we propose an efficient fully single-loop algorithm. We theoretically show that the error introduced by the new state variable can be bounded and that our algorithm converges at a rate of $O(\epsilon^{-2})$. Finally, we verify the efficacy of our algorithm empirically on multiple bilevel-optimization-based machine learning tasks.
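To make the idea concrete, here is a minimal sketch of a fully single-loop bilevel update in the spirit described above, on an illustrative toy quadratic problem. The problem instance, variable names, and step sizes are assumptions for illustration, not the paper's actual algorithm or code: the point is only that the linear system defining the hyper-gradient is never solved in an inner loop; instead a persistent state vector $v$ takes one Hessian-vector-product step per iteration.

```python
import numpy as np

# Hedged sketch (illustrative toy problem, not the paper's implementation).
# Toy bilevel problem:
#   outer: min_x f(x, y) = 0.5 * ||y - b||^2
#   inner: min_y g(x, y) = 0.5 * ||y - A x||^2   =>  y*(x) = A x
# Hyper-gradient:  grad_x f - (d^2 g / dx dy) v,  where v solves
# (d^2 g / dy^2) v = grad_y f.  Rather than solving this linear system in a
# nested loop, we keep v as a state variable and take ONE fixed-point step per
# iteration -- the "fully single loop", Hessian-inverse-free idea.

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])

x = rng.standard_normal(2)
y = np.zeros(2)
v = np.zeros(2)  # state variable carrying historical hyper-gradient information

alpha, eta, beta = 0.5, 0.5, 0.05  # inner, state, and outer step sizes
for _ in range(5000):
    # one inner step: grad_y g = y - A x
    y = y - alpha * (y - A @ x)
    # one state step toward solving (d^2 g/dy^2) v = grad_y f;
    # here d^2 g/dy^2 = I and grad_y f = y - b, so the residual is v - (y - b)
    v = v - eta * (v - (y - b))
    # outer step with the approximate hyper-gradient;
    # grad_x f = 0 and (d^2 g/dx dy) v = -A^T v, so hyper-grad ~= A^T v
    x = x - beta * (A.T @ v)

# At convergence x minimizes 0.5 * ||A x - b||^2, i.e. A x ~= b.
```

Note that each iteration costs only gradient and Hessian-vector-product evaluations; no Hessian is formed or inverted, and no nested loop over $v$ is required.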