We are motivated by the problem of providing strong generalization guarantees in the context of meta-learning. Existing generalization bounds are either challenging to evaluate or provide vacuous guarantees in even relatively simple settings. We derive a probably approximately correct (PAC) bound for gradient-based meta-learning using two different generalization frameworks in order to deal with the qualitatively different challenges of generalization at the "base" and "meta" levels. We employ bounds for uniformly stable algorithms at the base level and bounds from the PAC-Bayes framework at the meta level. The result is a novel PAC bound that is tighter when the base learner adapts quickly, which is precisely the goal of meta-learning. We show that our bound provides a tighter guarantee than other bounds on a toy non-convex problem on the unit sphere and on a text-based classification example. We also present a practical regularization scheme motivated by the bound in settings where the bound is loose, and demonstrate improved performance over baseline techniques.
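To make the PAC-Bayes ingredient concrete, the sketch below evaluates a classic McAllester-style PAC-Bayes bound. This is an illustrative textbook form only, not the paper's combined stability/PAC-Bayes bound; the function name, the choice of KL value, and the example numbers are assumptions for demonstration.

```python
import math

def mcallester_bound(emp_risk, kl, m, delta):
    """Illustrative McAllester-style PAC-Bayes bound (not the paper's bound).

    With probability at least 1 - delta over a sample of size m:
        R(Q) <= R_hat(Q) + sqrt((KL(Q||P) + ln(2*sqrt(m)/delta)) / (2*m))
    where R_hat(Q) is the empirical risk of posterior Q and
    KL(Q||P) is its divergence from the prior P.
    """
    complexity = (kl + math.log(2.0 * math.sqrt(m) / delta)) / (2.0 * m)
    return emp_risk + math.sqrt(complexity)

# Hypothetical numbers: empirical risk 0.10, KL term 5.0,
# 1000 samples, 95% confidence.
bound = mcallester_bound(emp_risk=0.10, kl=5.0, m=1000, delta=0.05)
```

Note how the complexity term shrinks as the sample size m grows and as the posterior stays close to the prior (small KL); in the paper's setting, fast base-level adaptation plays an analogous role in tightening the meta-level guarantee.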