Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks. Motivated by use cases in personalized federated learning, we study an often overlooked aspect of modern meta-learning algorithms -- their data efficiency. To shed more light on which methods are more efficient, we use techniques from algorithmic stability to derive bounds on the transfer risk that have important practical implications, indicating how much supervision is needed and how it must be allocated for each method to attain the desired level of generalization. Further, we introduce a new, simple framework for evaluating meta-learning methods under a limit on the available supervision, conduct an empirical study of MAML, Reptile, and Protonets, and demonstrate the differences in the behavior of these methods on few-shot and federated learning benchmarks. Finally, we propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited-supervision regime.