Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the metric can control meta-optimisation. Meanwhile, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. We achieve a new state of the art for model-free agents on the Atari ALE benchmark and demonstrate that our algorithm yields both performance and efficiency gains in multi-task meta-learning. Finally, we explore how bootstrapping opens up new possibilities and find that it can meta-learn efficient exploration in an epsilon-greedy Q-learning agent, without backpropagating through the update rule.
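The following is a minimal sketch of the bootstrapped meta-gradient idea described above, assuming a toy inner objective and a meta-learned learning rate as the only meta-parameter; the function names, step counts K and L, and the squared Euclidean (pseudo-)metric are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of bootstrapped meta-gradients (assumptions noted above).
import jax
import jax.numpy as jnp


def inner_loss(theta, batch):
    # Hypothetical inner objective: simple linear regression loss.
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2)


def inner_update(theta, eta, batch):
    # One inner update whose learning rate is controlled by the meta-parameter eta.
    g = jax.grad(inner_loss)(theta, batch)
    return theta - jax.nn.softplus(eta) * g


def meta_loss(eta, theta, batches, K=1, L=3):
    # 1) Unroll K inner steps that are differentiated through w.r.t. eta.
    for b in batches[:K]:
        theta = inner_update(theta, eta, b)
    # 2) Bootstrap: continue for L further steps to build a target,
    #    but stop gradients so these steps are not backpropagated through.
    target = theta
    for b in batches[K:K + L]:
        target = inner_update(target, eta, b)
    target = jax.lax.stop_gradient(target)
    # 3) Minimise a chosen (pseudo-)metric to the target; here, squared Euclidean distance.
    return jnp.sum((theta - target) ** 2)


def meta_step(eta, theta, batches, meta_lr=1e-2):
    # Update the meta-parameter by descending the bootstrapped meta-loss.
    grad_eta = jax.grad(meta_loss)(eta, theta, batches)
    return eta - meta_lr * grad_eta
```

In this sketch, the bootstrapped target extends the effective horizon by L extra steps while only K steps are backpropagated through, mirroring the mechanism the abstract describes.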