Automatic differentiation (autodiff) has revolutionized machine learning. It allows expressing complex computations by composing elementary ones in creative ways, and removes the burden of computing their derivatives by hand. More recently, differentiation of optimization problem solutions has attracted widespread attention, with applications such as optimization as a layer, and in bi-level problems such as hyper-parameter optimization and meta-learning. However, the formulas for these derivatives often involve tedious, case-by-case mathematical derivations. In this paper, we propose a unified, efficient and modular approach for implicit differentiation of optimization problems. In our approach, the user defines (in Python in the case of our implementation) a function $F$ capturing the optimality conditions of the problem to be differentiated. Once this is done, we leverage autodiff of $F$ and the implicit function theorem to automatically differentiate the optimization problem. Our approach thus combines the benefits of implicit differentiation and autodiff. It is efficient, as it can be added on top of any state-of-the-art solver, and modular, as the optimality condition specification is decoupled from the implicit differentiation mechanism. We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and create new ones easily. We demonstrate the ease of formulating and solving bi-level optimization problems using our framework. We also showcase an application to the sensitivity analysis of molecular dynamics.
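To make the recipe concrete, below is a minimal sketch of the idea for a ridge-regression inner problem, written with JAX. It is our illustration, not the paper's library API; the names `optimality_fun`, `solve_ridge`, and `implicit_grad` are hypothetical. The user writes only the optimality condition $F$; its Jacobians come from autodiff, and the implicit function theorem yields the derivative of the solution $x^\star(\theta)$ with respect to the hyper-parameter $\theta$.

```python
# Hypothetical sketch of implicit differentiation via the implicit
# function theorem; not the paper's actual library code.
import jax
import jax.numpy as jnp

def optimality_fun(x, theta, A, b):
    # F(x, theta) = grad_x [ ||Ax - b||^2 + theta * ||x||^2 ] = 0
    # characterizes the ridge-regression optimum x*(theta).
    return 2.0 * A.T @ (A @ x - b) + 2.0 * theta * x

def solve_ridge(theta, A, b):
    # Inner solver; a closed form here, but any black-box solver works,
    # since only the solution x* is needed, not its computational graph.
    n = A.shape[1]
    return jnp.linalg.solve(A.T @ A + theta * jnp.eye(n), A.T @ b)

def implicit_grad(theta, A, b):
    # Implicit function theorem: since F(x*(theta), theta) = 0,
    #   dx*/dtheta = -(dF/dx)^{-1} (dF/dtheta),
    # where both Jacobians are computed by autodiff of F alone.
    x_star = solve_ridge(theta, A, b)
    dF_dx = jax.jacobian(optimality_fun, argnums=0)(x_star, theta, A, b)
    dF_dtheta = jax.jacobian(optimality_fun, argnums=1)(x_star, theta, A, b)
    return -jnp.linalg.solve(dF_dx, dF_dtheta)

# Usage: sensitivity of the ridge solution to the regularization strength.
A = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = jnp.array([1.0, 0.0, 1.0])
print(implicit_grad(0.1, A, b))  # dx*/dtheta at theta = 0.1
```

Because the derivative is obtained by solving a linear system at the optimum, `solve_ridge` can be swapped for any state-of-the-art solver without touching the differentiation code, which illustrates the decoupling of solver and implicit differentiation mechanism described above.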