Automatic differentiation (autodiff) has revolutionized machine learning. It allows expressing complex computations by composing elementary ones in creative ways and removes the burden of computing their derivatives by hand. More recently, differentiation of optimization problem solutions has attracted widespread attention, with applications such as optimization layers and bi-level problems such as hyper-parameter optimization and meta-learning. However, so far, implicit differentiation has remained difficult to use for practitioners, as it often required tedious, case-by-case mathematical derivations and implementations. In this paper, we propose a unified, efficient and modular approach for implicit differentiation of optimization problems. In our approach, the user defines directly in Python a function $F$ capturing the optimality conditions of the problem to be differentiated. Once this is done, we leverage autodiff of $F$ and implicit differentiation to automatically differentiate the optimization problem. Our approach thus combines the benefits of implicit differentiation and autodiff. It is efficient, as it can be added on top of any state-of-the-art solver, and modular, as the optimality condition specification is decoupled from the implicit differentiation mechanism. We show that seemingly simple principles allow us to recover many existing implicit differentiation methods and to create new ones easily. We demonstrate the ease of formulating and solving bi-level optimization problems using our framework. We also showcase an application to the sensitivity analysis of molecular dynamics.
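The core mechanism can be illustrated with a small, self-contained JAX sketch (an illustration of the implicit function theorem, not the paper's actual API). For a ridge-regression inner problem, the user writes the optimality condition $F(x, \theta) = 0$ in Python, and the Jacobian of the solution follows from autodiff of $F$ alone, without differentiating through the solver:

```python
import jax
import jax.numpy as jnp

# Hypothetical example: ridge regression as the inner problem,
#   x*(theta) = argmin_x 0.5 * ||A x - b||^2 + 0.5 * theta * ||x||^2.
A = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = jnp.array([1.0, 2.0, 3.0])

def F(x, theta):
    # Optimality condition: gradient of the inner objective w.r.t. x.
    return A.T @ (A @ x - b) + theta * x

def solver(theta):
    # Any solver works here; the closed form suffices for this sketch.
    return jnp.linalg.solve(A.T @ A + theta * jnp.eye(2), A.T @ b)

def implicit_jacobian(theta):
    # Implicit function theorem: dx*/dtheta = -(dF/dx)^{-1} dF/dtheta.
    # Both partial Jacobians are obtained by autodiff of F; the solver
    # itself is never differentiated through.
    x_star = solver(theta)
    dF_dx = jax.jacobian(F, argnums=0)(x_star, theta)
    dF_dtheta = jax.jacobian(F, argnums=1)(x_star, theta)
    return -jnp.linalg.solve(dF_dx, dF_dtheta)

theta = 0.1
print(implicit_jacobian(theta))
# Agrees with differentiating the closed-form solution directly:
print(jax.jacobian(solver)(theta))
```

In the general case the closed-form `solver` above would be replaced by an arbitrary iterative solver, and only the optimality condition $F$ needs to be specified, which is what makes the approach modular.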