Few-shot learning for neural networks (NNs) is an important problem that aims to train NNs from only a small amount of data. The main challenge is avoiding overfitting, since over-parameterized NNs can easily overfit to such a small dataset. Previous work (e.g., MAML by Finn et al., 2017) tackles this challenge via meta-learning, which learns how to learn from a few examples by training over a variety of tasks. On the other hand, a conventional approach to avoiding overfitting is to restrict the hypothesis space by endowing NNs with sparse structures, such as convolutional layers in computer vision. However, although such manually designed sparse structures are sample-efficient for sufficiently large datasets, they are still insufficient for few-shot learning. The following questions then naturally arise: (1) Can we find sparse structures effective for few-shot learning by meta-learning? (2) What benefits would this bring in terms of meta-generalization? In this work, we propose a novel meta-learning approach, called Meta-ticket, to find optimal sparse subnetworks for few-shot learning within randomly initialized NNs. We empirically validate that Meta-ticket successfully discovers sparse subnetworks that can learn features specialized to each given task. Thanks to this task-wise adaptation ability, Meta-ticket achieves superior meta-generalization compared to MAML-based methods, especially with large NNs. The code is available at: https://github.com/dchiji-ntt/meta-ticket
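To make the core idea concrete, the snippet below is a minimal sketch of how a sparse subnetwork can be selected inside a randomly initialized layer: each frozen weight gets a learnable score, the top-k scored weights form a binary mask, and a straight-through estimator lets gradients reach the scores so the structure itself can be meta-learned. The `MaskedLinear` class, the `sparsity` parameter, and the initialization choices are illustrative assumptions for exposition, not the paper's exact implementation (see the repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer whose weights stay frozen at random initialization;
    a learnable score per weight selects the top-k weights as a mask."""

    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        w = torch.empty(out_features, in_features)
        nn.init.kaiming_uniform_(w)
        self.weight = nn.Parameter(w, requires_grad=False)  # frozen random init
        self.scores = nn.Parameter(torch.rand_like(w) * 0.01)  # structure params
        self.sparsity = sparsity

    def forward(self, x):
        n = self.scores.numel()
        k = int(n * (1.0 - self.sparsity))                 # number of kept weights
        thr = self.scores.flatten().kthvalue(n - k + 1).values
        hard = (self.scores >= thr).float()                # binary top-k mask
        # Straight-through estimator: forward uses the hard mask,
        # backward passes gradients to the continuous scores.
        mask = hard + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```

In a MAML-style bilevel loop (again, a sketch rather than the paper's procedure), the inner loop would adapt the selected weights on each task's support set, while the outer loop updates the scores across tasks; the straight-through trick is what makes the discrete mask differentiable for that outer update.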