Gradient-based hyperparameter optimization has gained widespread popularity in the context of few-shot meta-learning, but remains broadly impractical for long-horizon tasks (many gradient steps) due to memory scaling and gradient degradation issues. A common workaround is to learn hyperparameters online, but this introduces greediness, which comes with a significant performance drop. We propose Forward-mode Differentiation with Sharing (FDS), a simple and efficient algorithm that tackles memory scaling issues with forward-mode differentiation, and gradient degradation issues by sharing hyperparameters that are contiguous in time. We provide theoretical guarantees on the noise reduction properties of our algorithm, and demonstrate its efficiency empirically by differentiating through $\sim 10^4$ gradient steps of unrolled optimization. We consider large hyperparameter search ranges on CIFAR-10, where we significantly outperform greedy gradient-based alternatives while achieving $\times 20$ speedups over state-of-the-art black-box methods. Code is available at: \url{https://github.com/polo5/FDS}
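As a minimal sketch of the underlying mechanism (our notation, not necessarily the paper's), consider an inner update $\theta_{t+1} = \Phi(\theta_t, \lambda_{\sigma(t)})$, where $\sigma(t)$ assigns each gradient step $t$ to one of a small number of hyperparameters shared over contiguous intervals of time. Forward-mode differentiation carries a tangent $Z_t^{(j)} = \partial \theta_t / \partial \lambda_j$ alongside the weights,
\[
Z_{t+1}^{(j)} = \frac{\partial \Phi}{\partial \theta_t}\, Z_t^{(j)} + \frac{\partial \Phi}{\partial \lambda_{\sigma(t)}}\, \mathbf{1}[\sigma(t) = j],
\qquad
\frac{\partial L_{\mathrm{val}}(\theta_T)}{\partial \lambda_j} = \nabla_\theta L_{\mathrm{val}}(\theta_T)^{\top} Z_T^{(j)},
\]
so memory does not grow with the horizon $T$ (unlike reverse-mode backpropagation through the unroll), and sharing hyperparameters across contiguous steps keeps the number of tangents $Z^{(j)}$ small.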