Meta learning is a promising solution to few-shot learning problems. However, existing meta learning methods are restricted to scenarios where training and application tasks share the same output structure. To obtain a meta model applicable to tasks with new structures, one must collect new training data and repeat the time-consuming meta training procedure, which makes these methods inefficient or even inapplicable for heterogeneous few-shot learning tasks. We therefore develop a novel and principled Hierarchical Meta Learning (HML) method. Unlike existing methods, which only optimize the adaptability of a meta model to similar tasks, HML also explicitly optimizes its generalizability across heterogeneous tasks. To this end, HML first factorizes a set of similar training tasks into heterogeneous ones and then trains the meta model over them at two levels, maximizing adaptation and generalization performance respectively. The resulting model can directly generalize to new tasks. Extensive experiments on few-shot classification and regression problems demonstrate that HML clearly outperforms fine-tuning and state-of-the-art meta learning approaches in generalizing across heterogeneous tasks.
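To make the two-level procedure concrete, below is a minimal, runnable MAML-style sketch of one plausible reading of it: a lower level that adapts shared meta parameters to individual tasks, and an upper level that updates those parameters using post-adaptation losses pooled across heterogeneous task groups. The toy sine-regression family, the two hand-picked task groups, and all hyperparameters (inner_lr, outer_lr, step counts) are illustrative assumptions, not the authors' implementation.

```python
# A minimal MAML-style sketch of two-level meta training (illustrative only).
# Lower level: adapt meta parameters to a single task on its support set.
# Upper level: update meta parameters on query losses pooled across
# heterogeneous task groups. The task family, groups, and hyperparameters
# are assumptions, not the HML paper's implementation.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def forward(params, x):
    # Tiny functional MLP for 1-D regression.
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def init_params():
    return [(0.1 * torch.randn(1, 32)).requires_grad_(),
            torch.zeros(32, requires_grad=True),
            (0.1 * torch.randn(32, 1)).requires_grad_(),
            torch.zeros(1, requires_grad=True)]

def make_task(slope, bias):
    # One few-shot regression task y = slope * sin(x) + bias (assumed family).
    x = torch.rand(20, 1) * 4 - 2
    y = slope * torch.sin(x) + bias
    return (x[:10], y[:10]), (x[10:], y[10:])  # support / query split

def adapt(params, support, inner_lr=0.05, steps=3):
    # Lower level: gradient-based adaptation from the shared meta parameters;
    # create_graph=True keeps the graph so the outer update is second-order.
    x, y = support
    for _ in range(steps):
        loss = F.mse_loss(forward(params, x), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

meta_params, outer_lr = init_params(), 1e-2
groups = [(1.0, 0.0), (2.0, 1.0)]  # two "heterogeneous" task groups (assumed)
for step in range(200):
    # Upper level: pool post-adaptation query losses across groups so the
    # meta update favors cross-group generalization, not just adaptation.
    meta_loss = sum(
        F.mse_loss(forward(adapt(meta_params, support), qx), qy)
        for support, (qx, qy) in (make_task(*g) for g in groups))
    grads = torch.autograd.grad(meta_loss, meta_params)
    meta_params = [(p - outer_lr * g).detach().requires_grad_()
                   for p, g in zip(meta_params, grads)]
```

This sketch only mirrors the two-level structure on a toy problem; in the method the abstract describes, the lower level adapts within groups of similar tasks while the upper level explicitly optimizes generalization across the heterogeneous groups produced by the task factorization.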