Model-Agnostic Meta-Learning (MAML) is one of the most representative gradient-based meta-learning algorithms. MAML learns new tasks from a few data samples via inner-loop updates starting from a meta-initialization point, and learns the meta-initialization parameters via outer-loop updates. It has recently been hypothesized that representation reuse, in which the already-efficient representations change little during adaptation, rather than representation change, in which the representations change significantly, is the dominant factor in the performance of a model meta-initialized by MAML. In this study, we investigate the necessity of representation change for the ultimate goal of few-shot learning: solving domain-agnostic tasks. To this end, we propose a novel meta-learning algorithm, BOIL (Body Only update in Inner Loop), which updates only the body (feature extractor) of the model and freezes the head (classifier) during inner-loop updates. BOIL leverages representation change rather than representation reuse, because the feature vectors (representations) must move quickly toward their corresponding frozen head vectors. We visualize this property using cosine similarity, CKA, and empirical results with the head removed. BOIL shows significant empirical improvements over MAML, particularly on cross-domain tasks. These results imply that representation change is a critical component of gradient-based meta-learning approaches.
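To make the training scheme concrete, the following is a minimal PyTorch sketch of a BOIL-style inner loop, not the authors' implementation: the split into a body module and a head module, the function name boil_inner_loop, and the learning-rate and step-count defaults are illustrative assumptions, and torch.func.functional_call assumes PyTorch 2.0 or later.

import torch
import torch.nn.functional as F
from torch.func import functional_call

def boil_inner_loop(body, head, support_x, support_y,
                    inner_lr=0.5, inner_steps=1):
    # BOIL inner loop (sketch): adapt only the body's parameters.
    # The head receives no inner-loop update; it is still meta-learned
    # in the outer loop, together with the body's meta-initialization.
    fast = {name: p.clone() for name, p in body.named_parameters()}
    for _ in range(inner_steps):
        feats = functional_call(body, fast, (support_x,))  # body with fast weights
        logits = head(feats)                               # frozen head
        loss = F.cross_entropy(logits, support_y)
        # create_graph=True retains second-order terms for the outer
        # update, as in MAML.
        grads = torch.autograd.grad(loss, list(fast.values()),
                                    create_graph=True)
        fast = {name: p - inner_lr * g
                for (name, p), g in zip(fast.items(), grads)}
    return fast

Under this scheme the support loss can decrease only if the feature vectors move toward the frozen head's class vectors, which is the representation change described above; a MAML inner loop would instead clone and update the head's parameters as well.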