Meta-learning (ML) has emerged as a promising method for learning under resource constraints, such as few-shot learning. ML approaches typically propose a methodology for learning generalizable models. In this work-in-progress paper, we put recent ML approaches to a stress test to discover their limitations. Specifically, we measure the performance of ML approaches for few-shot learning against increasing task complexity. Our results show a rapid degradation in the performance of initialization strategies for ML (MAML, TAML, and MetaSGD), while, surprisingly, approaches that use an optimization strategy (MetaLSTM) perform significantly better. We further demonstrate the effectiveness of an optimization strategy for ML (MetaLSTM++) trained in the MAML manner over a pure optimization strategy. Our experiments also show that optimization strategies for ML achieve higher transferability from simple to complex tasks.
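To make the distinction between the two families concrete, below is a minimal sketch of an initialization strategy in the spirit of MAML, written in JAX on a toy regression problem. All names (loss, inner_update, maml_loss, the learning rates, and the task sampler) are illustrative assumptions, not the paper's implementation; an optimization strategy such as MetaLSTM would instead replace the hand-designed inner gradient step with a learned update rule.

```python
# A minimal MAML-style sketch (illustrative only, not the paper's code):
# the outer loop learns an initialization that adapts to a new task
# with a single inner gradient step.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # simple linear model: prediction = w * x + b
    w, b = params
    pred = w * x + b
    return jnp.mean((pred - y) ** 2)

def inner_update(params, x, y, inner_lr=0.1):
    # one gradient step of task-specific adaptation (the inner loop)
    grads = jax.grad(loss)(params, x, y)
    return tuple(p - inner_lr * g for p, g in zip(params, grads))

def maml_loss(params, x_support, y_support, x_query, y_query):
    # adapt on the task's support set, then evaluate on its query set
    adapted = inner_update(params, x_support, y_support)
    return loss(adapted, x_query, y_query)

# Meta-training: the outer loop updates the shared initialization.
params = (jnp.array(0.0), jnp.array(0.0))
meta_lr = 0.01
key = jax.random.PRNGKey(0)
for step in range(100):
    key, k1, k2 = jax.random.split(key, 3)
    slope = jax.random.uniform(k1, minval=-2.0, maxval=2.0)  # sample a task
    x = jax.random.normal(k2, (10,))
    y = slope * x
    meta_grads = jax.grad(maml_loss)(params, x[:5], y[:5], x[5:], y[5:])
    params = tuple(p - meta_lr * g for p, g in zip(params, meta_grads))
```

Note that the outer gradient differentiates through the inner update; this is what lets an initialization strategy learn parameters that adapt quickly, whereas an optimization strategy would parameterize and learn inner_update itself.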