The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers that of meta-testing tasks. In applications with either insufficient tasks or a very narrow meta-training task distribution, this assumption is frequently violated, leading to memorization or learner overfitting. Recent solutions have pursued augmentation of meta-training tasks, yet how to generate tasks that are both correct and sufficiently diverse remains an open question. In this paper, we seek an approach that up-samples meta-training tasks from their task representations via a task up-sampling network. Moreover, the resulting approach, named Adversarial Task Up-sampling (ATU), generates tasks that maximally contribute to the latest meta-learner by maximizing an adversarial loss. On few-shot sine regression and image classification datasets, we empirically validate that ATU markedly improves over state-of-the-art task augmentation strategies in both meta-testing performance and the quality of the up-sampled tasks.
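To make the mechanism concrete, the following is a minimal PyTorch sketch of the adversarial up-sampling idea described above: a set-conditioned network perturbs a task's support inputs, and is trained to maximize the current meta-learner's loss on the generated task. The `TaskUpsampler` architecture, the fidelity penalty `lam`, and all names and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TaskUpsampler(nn.Module):
    """Hypothetical task up-sampling network: maps a set-level task
    representation plus noise to a perturbation of the task's inputs."""
    def __init__(self, dim, noise_dim=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(hidden + noise_dim + dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))
        self.noise_dim = noise_dim

    def forward(self, x):
        # x: (n_points, dim) support inputs of one meta-training task.
        task_repr = self.encoder(x).mean(0, keepdim=True)   # set pooling
        z = torch.randn(x.size(0), self.noise_dim)          # up-sampling noise
        h = torch.cat([task_repr.expand(x.size(0), -1), z, x], dim=-1)
        return x + self.decoder(h)                          # up-sampled inputs

def adversarial_step(upsampler, meta_learner, loss_fn, x, y, opt_up, lam=1.0):
    """One up-sampler update: generate a task that maximizes the current
    meta-learner's loss (adversarial term) while keeping the generated
    inputs close to the real ones (an assumed fidelity term)."""
    x_aug = upsampler(x)
    adv_loss = -loss_fn(meta_learner(x_aug), y)    # ascend the meta-learner loss
    fidelity = lam * (x_aug - x).pow(2).mean()     # stay near real tasks
    opt_up.zero_grad()
    (adv_loss + fidelity).backward()
    opt_up.step()

# Toy usage on a single sine-regression task (illustrative only).
dim = 1
upsampler = TaskUpsampler(dim)
meta_learner = nn.Sequential(nn.Linear(dim, 40), nn.ReLU(), nn.Linear(40, 1))
opt_up = torch.optim.Adam(upsampler.parameters(), lr=1e-3)
x = torch.rand(10, dim) * 10 - 5
y = torch.sin(x)
adversarial_step(upsampler, meta_learner, nn.MSELoss(), x, y, opt_up)
```

In a full training loop, the meta-learner would then be updated on the union of real and up-sampled tasks, so that the two networks are optimized in alternation.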