The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data. However, existing works solve the problem only in parameter space, which (i) ignores the rich data knowledge contained in the pre-trained models; (ii) cannot scale to large-scale pre-trained models; and (iii) can only meta-learn pre-trained models that share the same network architecture. To address these issues, we propose a unified framework, dubbed PURER, which contains: (1) ePisode cUrriculum inveRsion (ECI) during data-free meta training; and (2) invErsion calibRation following inner loop (ICFIL) during meta testing. During meta training, ECI performs pseudo episode training so that the meta model learns to adapt quickly to new unseen tasks. Specifically, we progressively synthesize a sequence of pseudo episodes by distilling the training data from each pre-trained model, and ECI adaptively increases the difficulty of the pseudo episodes according to real-time feedback from the meta model. We formulate the optimization of meta training with ECI as an adversarial problem and solve it end-to-end. During meta testing, we further propose ICFIL, a simple plug-and-play supplement used only during meta testing, to narrow the gap between the meta training and meta testing task distributions. Extensive experiments in various real-world scenarios show the superior performance of our method.
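To make the pseudo-episode synthesis step concrete, below is a minimal PyTorch sketch of how an episode might be distilled from a single frozen pre-trained model by gradient-based inversion, together with a crude curriculum rule tied to meta-model feedback. All names (synthesize_episode, update_difficulty), shapes, and hyperparameters are illustrative assumptions, not the authors' implementation; in particular, the full PURER method couples inversion and meta training adversarially, which this sketch omits.

```python
import torch
import torch.nn.functional as F

def synthesize_episode(pretrained, n_way, k_shot, q_query,
                       image_shape=(3, 32, 32), steps=100, lr=0.1,
                       device="cpu"):
    """Distill a pseudo N-way episode from one frozen pre-trained model by
    optimizing random inputs until the model assigns them chosen labels."""
    pretrained.eval()
    n_per_class = k_shot + q_query
    # class-major label layout: [0,...,0, 1,...,1, ..., n_way-1,...]
    labels = torch.arange(n_way, device=device).repeat_interleave(n_per_class)
    x = torch.randn(len(labels), *image_shape, device=device,
                    requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(pretrained(x), labels)  # inversion objective
        loss.backward()
        opt.step()
    x = x.detach()
    # split each class's samples into support and query sets
    idx = torch.arange(len(labels)).view(n_way, n_per_class)
    support = idx[:, :k_shot].reshape(-1)
    query = idx[:, k_shot:].reshape(-1)
    return (x[support], labels[support]), (x[query], labels[query])

def update_difficulty(query_acc, n_way, max_way, threshold=0.8):
    """Crude curriculum: once the meta model handles the current episodes
    well (query accuracy above a threshold), add a class per episode."""
    return min(n_way + 1, max_way) if query_acc > threshold else n_way
```

A typical loop would call synthesize_episode for a pre-trained model, run one inner-loop adaptation of the meta model on the support set, measure query accuracy, and feed it to update_difficulty; the adversarial formulation in the paper instead updates the synthetic inputs and the meta model jointly within one end-to-end objective.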