Few-shot learning is currently enjoying a considerable resurgence of interest, aided by recent advances in deep learning. Contemporary approaches based on weight-generation schemes deliver a straightforward and flexible solution to the problem. However, they do not fully consider both the representation power for unseen categories and the weight-generation capacity during feature learning, which creates a significant performance bottleneck. This paper proposes multi-level weight-centric feature learning to give full play to the feature extractor's dual roles in few-shot learning. Our proposed method consists of two essential techniques: a weight-centric training strategy that improves the features' prototype-ability, and a multi-level feature that incorporates mid- and relation-level information. The former increases the feasibility of constructing a discriminative decision boundary from only a few samples, while the latter improves transferability for characterizing novel classes and preserves classification capability on base classes. We extensively evaluate our approach on low-shot classification benchmarks. Experiments demonstrate that our proposed method significantly outperforms its counterparts in both standard and generalized settings and across different network backbones.
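To make the weight-generation idea concrete, the sketch below illustrates the common prototype-as-weight scheme from the few-shot literature, not this paper's specific method: classifier weights for a novel class are generated by averaging and normalizing its few support features, and queries are classified by cosine similarity. The function names (`generate_class_weights`, `classify`) and the toy episode setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_class_weights(support_features):
    # Average the support features and L2-normalize: the mean acts as a
    # class prototype that directly serves as the generated classifier weight.
    w = support_features.mean(axis=0)
    return w / np.linalg.norm(w)

def classify(query, weights):
    # Cosine-similarity classifier: normalize the query feature and pick
    # the class whose generated weight yields the highest dot product.
    q = query / np.linalg.norm(query)
    return int(np.argmax(weights @ q))

# Toy 5-way 1-shot episode with 8-dimensional features: each class c gets
# one support feature drawn around mean c (a stand-in for a real extractor).
support = {c: rng.normal(loc=c, size=(1, 8)) for c in range(5)}
W = np.stack([generate_class_weights(support[c]) for c in range(5)])

# A query near class 3's support feature should be assigned to class 3.
query = support[3][0] + 0.01 * rng.normal(size=8)
pred = classify(query, W)
```

The paper's point is that this only works well if the feature extractor is trained so that few-sample means are already good prototypes (the "prototype-ability" that the weight-centric training strategy targets).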