Learned optimizers are increasingly effective, with performance exceeding that of hand-designed optimizers such as Adam~\citep{kingma2014adam} on specific tasks \citep{metz2019understanding}. Despite these potential gains, in current work the meta-training (or `outer-training') of the learned optimizer is performed by a hand-designed optimizer, or by an optimizer itself trained by a hand-designed optimizer \citep{metz2020tasks}. We show that a population of randomly initialized learned optimizers can be used to train themselves from scratch in an online fashion, without resorting to a hand-designed optimizer at any point in the process. A form of population-based training is used to orchestrate this self-training. Although the randomly initialized optimizers initially make only slow progress, as they improve they enter a positive feedback loop and rapidly become more effective at training themselves. We believe feedback loops of this type, in which an optimizer improves itself, will be important and powerful in the future of machine learning. These methods not only provide a path towards increased performance, but, more importantly, reduce research and engineering effort.
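As a rough illustration of the self-training loop described above (a minimal sketch under simplifying assumptions, not the method used in the paper; all names, the toy inner task, and the finite-difference meta-gradient are hypothetical), the following Python snippet has each member of a population of toy learned optimizers updated by a peer, with population-based training periodically replacing the worst member by a perturbed copy of the best:

\begin{verbatim}
import random
import copy

class LearnedOptimizer:
    # Toy learned optimizer: one learned coefficient that scales gradients.
    # (A real learned optimizer would be a neural network.)
    def __init__(self):
        self.params = [random.uniform(-0.1, 0.1)]  # random initialization

    def step(self, grads):
        # Map gradients to parameter updates via the learned coefficient.
        return [-self.params[0] * g for g in grads]

def meta_loss(opt, steps=20):
    # Score an optimizer by how well it minimizes an inner task f(p) = p^2.
    p = 1.0
    for _ in range(steps):
        p += opt.step([2.0 * p])[0]  # gradient of p^2 is 2p
    return p * p

population = [LearnedOptimizer() for _ in range(8)]
for generation in range(50):
    # Each optimizer is trained *by a peer*: the peer turns finite-difference
    # meta-gradients into an update for the optimizer's meta-parameters.
    for opt, trainer in zip(population, population[1:] + population[:1]):
        base, eps = meta_loss(opt), 1e-3
        grads = []
        for i, w in enumerate(opt.params):
            opt.params[i] = w + eps
            grads.append((meta_loss(opt) - base) / eps)
            opt.params[i] = w
        opt.params = [w + u for w, u in zip(opt.params, trainer.step(grads))]
    # Population-based training: replace the worst member with a perturbed
    # copy of the best.
    population.sort(key=meta_loss)  # ascending meta-loss: best first
    clone = copy.deepcopy(population[0])
    clone.params = [w + random.gauss(0.0, 0.01) for w in clone.params]
    population[-1] = clone

print("best meta-loss after self-training:", meta_loss(population[0]))
\end{verbatim}

In this toy setting, the positive feedback loop arises because members that improve produce better meta-updates for their peers, which in turn makes those peers better trainers.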