While the number of instances and classes per task can vary widely in realistic settings, existing meta-learning approaches for few-shot classification assume that the number of instances per task and per class is fixed. Due to this restriction, they learn to utilize the meta-knowledge equally across all tasks, even when the number of instances per task and class varies largely. Moreover, they do not account for the distributional shift of unseen tasks, on which the meta-knowledge may be less useful depending on task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of meta-learning and task-specific learning within each task. By learning the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or on task-specific learning. We formulate this objective within a Bayesian inference framework and solve it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on multiple realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. A further ablation study confirms the effectiveness of each balancing component and of the Bayesian learning framework.
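To make the balancing idea concrete, here is a minimal sketch of how a per-task balancing variable could modulate a task-specific gradient step away from a meta-learned initialization. All names and the exact update form are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def balanced_update(theta_meta, grad_task, z_logit, lr=0.1):
    """Hypothetical sketch of a balanced inner-loop step.

    A balancing variable z in (0, 1), inferred per task, scales how far
    task-specific adaptation moves away from the meta-learned
    initialization theta_meta:
      z near 0 -> rely on the meta-knowledge (update suppressed);
      z near 1 -> rely on task-specific learning (full gradient step).
    """
    z = sigmoid(z_logit)  # balancing variable for this task
    return theta_meta - z * lr * grad_task

# Usage: with a large balancing logit, the update follows the task
# gradient almost fully; with a very negative logit, it stays near
# the meta-learned initialization.
theta = np.array([1.0, -2.0])
grad = np.array([0.5, 0.5])
print(balanced_update(theta, grad, z_logit=10.0))    # close to theta - lr*grad
print(balanced_update(theta, grad, z_logit=-10.0))   # close to theta
```

In the actual model, the balancing variables are latent and inferred with variational inference rather than set by hand; the sketch only shows how such a variable can interpolate between the two learning regimes.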