Optimization is a ubiquitous modeling tool and is often deployed in settings which repeatedly solve similar instances of the same problem. Amortized optimization methods use learning to predict the solutions to problems in these settings, leveraging the shared structure between similar problem instances. In this tutorial, we will discuss the key design choices behind amortized optimization, roughly categorizing 1) models into fully-amortized and semi-amortized approaches, and 2) learning methods into regression-based and objective-based approaches. We then view existing applications through these foundations to draw connections between them, including for manifold optimization, variational inference, sparse coding, meta-learning, control, reinforcement learning, convex optimization, and deep equilibrium networks. This framing enables us to easily see, for example, that the amortized inference in variational autoencoders is conceptually identical to value gradients in control and reinforcement learning, as both use fully-amortized models with an objective-based loss. The source code for this tutorial is available at https://www.github.com/facebookresearch/amortized-optimization-tutorial
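To make the two design axes concrete, here is a minimal sketch of a fully-amortized model trained on a toy family of quadratic problems min_x f(x; theta) = 0.5 x^T A x - theta^T x, whose exact solution is x*(theta) = A^{-1} theta. The quadratic family, the network architecture, and all names here are illustrative assumptions for exposition, not code from the tutorial's repository; PyTorch is assumed as the framework.

```python
# A minimal sketch of fully-amortized optimization (illustrative, assuming PyTorch).
import torch
import torch.nn as nn

n = 10                                  # problem dimension
A = 2.0 * torch.eye(n)                  # fixed positive-definite quadratic term

def f(x, theta):
    """Objective f(x; theta) for a batch of problem contexts theta."""
    return 0.5 * (x @ A * x).sum(-1) - (theta * x).sum(-1)

def solve_exact(theta):
    """Ground-truth solutions x*(theta) = A^{-1} theta, used by the regression loss."""
    return theta @ torch.inverse(A)

# Fully-amortized model: a single network maps the context theta to a solution.
model = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    theta = torch.randn(128, n)         # sample a batch of problem instances
    x_hat = model(theta)                # predicted solutions x_hat(theta)

    # Regression-based loss: match known ground-truth solutions.
    loss_reg = ((x_hat - solve_exact(theta)) ** 2).mean()
    # Objective-based loss: minimize f directly; no solver or solutions needed.
    loss_obj = f(x_hat, theta).mean()

    opt.zero_grad()
    loss_obj.backward()                 # swap in loss_reg for the regression variant
    opt.step()
```

The two losses in the sketch correspond to the tutorial's learning-method categorization: the regression-based variant requires access to ground-truth solutions, while the objective-based variant only requires differentiating the objective itself. A semi-amortized model would instead predict an initialization or update rule and run a few optimizer iterations on top of the prediction.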