Recently, it has been observed that a transfer learning solution might be all we need to solve many few-shot learning benchmarks, raising important questions about when and how meta-learning algorithms should be deployed. In this paper, we seek to clarify these questions by proposing a novel metric, the diversity coefficient, which measures the diversity of tasks in a few-shot learning benchmark. We hypothesize that the diversity coefficient of a few-shot learning benchmark is predictive of whether meta-learning solutions will succeed. Using the diversity coefficient, we show that the MiniImagenet benchmark has zero diversity. This novel insight contextualizes claims that transfer learning solutions are better than meta-learned solutions. Specifically, we empirically find that a diversity coefficient of zero correlates with a high similarity between transfer learning and Model-Agnostic Meta-Learning (MAML) solutions in terms of meta-accuracy at meta-test time. Therefore, we conjecture that meta-learned solutions have the same meta-test performance as transfer learning when the diversity coefficient is zero. Our work provides the first test of whether task diversity correlates with meta-learning success.
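The abstract does not define how the diversity coefficient is computed. As a minimal, hedged sketch, one can imagine it as the average pairwise dissimilarity between vector embeddings of the benchmark's tasks; the embedding method, the distance, and the function name `diversity_coefficient` below are illustrative assumptions, not the paper's definition.

```python
import numpy as np


def diversity_coefficient(task_embeddings: np.ndarray) -> float:
    """Average pairwise cosine dissimilarity between task embeddings.

    task_embeddings: array of shape (num_tasks, embedding_dim), where each row
    is a vector representation of one few-shot task (hypothetical embedding;
    the abstract does not specify how tasks are represented).
    """
    # Normalize each embedding to unit length so dot products are cosine similarities.
    norms = np.linalg.norm(task_embeddings, axis=1, keepdims=True)
    unit = task_embeddings / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T
    # Mean dissimilarity over distinct task pairs (upper triangle, excluding the diagonal).
    n = len(task_embeddings)
    iu = np.triu_indices(n, k=1)
    return float(np.mean(1.0 - sims[iu]))


if __name__ == "__main__":
    # Tasks with nearly identical embeddings give a coefficient near zero,
    # which is the regime the abstract associates with MiniImagenet.
    rng = np.random.default_rng(0)
    base = rng.normal(size=64)
    tasks = np.stack([base + 1e-6 * rng.normal(size=64) for _ in range(10)])
    print(diversity_coefficient(tasks))  # ~0.0
```

Under this reading, a coefficient of zero means all tasks look alike to the embedding, which is consistent with the abstract's claim that transfer learning and MAML behave similarly on such a benchmark.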