We study the connection between gradient-based meta-learning and convex optimisation. We observe that gradient descent with momentum is a special case of meta-gradients, and, building on recent results in optimisation, we prove convergence rates for meta-learning in the single-task setting. While a meta-learned update rule can yield faster convergence up to a constant factor, it is not sufficient for acceleration. Instead, some form of optimism is required. We show that optimism in meta-learning can be captured through Bootstrapped Meta-Gradients (Flennerhag et al., 2022), providing deeper insight into its underlying mechanics.
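The base update the abstract refers to, gradient descent with momentum (heavy-ball), can be sketched as follows. This is a minimal illustration of that classical update, not the paper's implementation; the function name, step size, and test objective are assumptions for the example. The momentum buffer `v` is the quantity that, in the abstract's framing, a meta-learned update rule recovers as a special case.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=300):
    """Gradient descent with heavy-ball momentum.

    Update: v <- beta * v + grad(x);  x <- x - lr * v.
    The buffer v is an exponentially weighted average of past
    gradients; plain gradient descent is the special case beta = 0.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v + grad(x)  # accumulate gradients into the buffer
        x = x - lr * v          # step along the averaged direction
    return x

# Minimise the convex quadratic f(x) = 0.5 * ||x||^2, with gradient x.
x_star = heavy_ball(lambda x: x, x0=[5.0, -3.0])
```

On this quadratic the iterates contract toward the minimiser at the origin; with `beta = 0` the same code runs vanilla gradient descent, which is the baseline the convergence-rate comparison in the abstract is made against.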