Recent advances in meta-learning have led to remarkable performance on several few-shot learning benchmarks. However, such success often ignores the similarity between training and testing tasks, resulting in a potentially biased evaluation. We therefore propose a generative approach based on a variant of Latent Dirichlet Allocation to analyse task similarity, in order to optimise and better understand the performance of meta-learning. We demonstrate that the proposed method can provide an insightful evaluation of meta-learning algorithms on two few-shot classification benchmarks that matches common intuition: the more similar the training and testing tasks, the higher the performance. Based on this similarity measure, we propose a task-selection strategy for meta-learning and show that it produces more accurate classification results than methods that select training tasks at random.
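The core idea of the similarity measure can be sketched with standard tools. The snippet below is a minimal illustration, not the paper's LDA variant: it assumes each task is represented as a bag of discrete tokens (here, class-label words), fits an off-the-shelf LDA model to obtain a topic distribution per task, scores similarity as one minus the Jensen–Shannon distance between topic distributions, and selects the training tasks most similar to a test task. All task strings and the `top_k` choice are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy tasks, each represented as a bag of class-label tokens (an assumption
# for illustration; the paper's task representation may differ).
train_tasks = ["dog cat bird", "car truck bus", "dog wolf fox"]
test_task = "cat dog tiger"

# Fit LDA jointly over training and test tasks to get topic distributions.
vec = CountVectorizer()
X = vec.fit_transform(train_tasks + [test_task])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # one topic distribution per task

# Similarity = 1 - Jensen-Shannon distance between topic distributions.
test_theta = theta[-1]
sims = [1.0 - jensenshannon(t, test_theta) for t in theta[:-1]]

# Task selection: keep the training tasks most similar to the test task.
top_k = np.argsort(sims)[::-1][:2]
selected = [train_tasks[i] for i in top_k]
```

A selection strategy along these lines replaces uniform random sampling of training tasks with sampling weighted (or filtered) by similarity to the target tasks, which is the mechanism the abstract credits for the accuracy gains.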