A core issue with learning to optimize neural networks has been the lack of generalization to real-world problems. To address this, we describe a system designed from a generalization-first perspective: it learns to update optimizer hyperparameters, rather than model parameters directly, using novel features, actions, and a reward function. This system outperforms Adam on all neural network tasks, including modalities not seen during training. We achieve 2x speedups on ImageNet, and a 2.5x speedup on a language modeling task that uses over 5 orders of magnitude more compute than the training tasks.
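To make the core idea concrete, below is a minimal sketch, not the paper's implementation: an outer controller periodically observes features of an inner training run and applies multiplicative actions to the inner optimizer's hyperparameters, with a reward tied to training progress. The toy problem, the hand-written stand-in policy, and all names here are illustrative assumptions; in the actual system the policy is learned.

```python
# Sketch of hyperparameter-level control (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Inner problem: least squares, trained with SGD whose learning rate
# is updated from the outside rather than held fixed.
A = rng.normal(size=(100, 10))
b = rng.normal(size=100)
w = np.zeros(10)
lr = 0.01  # the hyperparameter the controller updates, not the weights

def loss(w):
    r = A @ w - b
    return 0.5 * np.mean(r ** 2)

def grad(w):
    return A.T @ (A @ w - b) / len(b)

ACTIONS = [0.5, 1.0, 2.0]  # multiplicative updates to the learning rate

prev_loss = loss(w)
for step in range(200):
    w -= lr * grad(w)          # inner optimizer step
    if step % 10 == 9:         # controller acts every 10 inner steps
        cur_loss = loss(w)
        # Feature: recent relative progress. A learned policy would map
        # richer features to an action; this heuristic is a stand-in.
        progress = (prev_loss - cur_loss) / max(prev_loss, 1e-12)
        lr *= ACTIONS[2] if progress > 0.05 else ACTIONS[0]
        # Reward analogous to log-loss improvement, which an RL-style
        # training procedure could use to update the controller.
        reward = np.log(max(prev_loss, 1e-12)) - np.log(max(cur_loss, 1e-12))
        prev_loss = cur_loss

print(f"final loss: {loss(w):.6f}, final lr: {lr:.4f}")
```

Because the controller only rescales hyperparameters, it is agnostic to the inner model's architecture and size, which is one plausible reading of why such a scheme could transfer to tasks and modalities unseen during training.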