Fine-grained few-shot recognition often suffers from training data scarcity for novel categories. With insufficient training data, the network tends to overfit and generalizes poorly to unseen classes. Many methods have been proposed to synthesize additional data to support training. In this paper, we focus on enlarging the intra-class variance of the unseen classes to improve few-shot classification performance. We assume that the distribution of intra-class variance generalizes across the base classes and the novel classes. Thus, the intra-class variance of the base set can be transferred to the novel set for feature augmentation. Specifically, we first model the distribution of intra-class variance on the base set via variational inference. The learned distribution is then transferred to the novel set to generate additional features, which are used together with the original ones to train a classifier. Experimental results show a significant boost over state-of-the-art methods on challenging fine-grained few-shot image classification benchmarks.
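The variance-transfer idea can be illustrated with a minimal sketch. This is not the paper's variational-inference model; it assumes a simple diagonal Gaussian whose per-dimension variance is estimated from the base classes and reused to sample extra features around a novel class's mean. All names, shapes, and distributions here are illustrative.

```python
# Hypothetical sketch of variance-transfer feature augmentation.
# Assumption: intra-class variance is modeled as a diagonal Gaussian
# (the paper uses variational inference instead).
import numpy as np

rng = np.random.default_rng(0)

# Base set: several classes with many 64-d feature vectors each.
base_features = {c: rng.normal(loc=c, scale=1.5, size=(100, 64))
                 for c in range(5)}

# Pool per-dimension intra-class variance across base classes.
base_var = np.mean(
    [feats.var(axis=0) for feats in base_features.values()], axis=0)

# Novel class: only a few shots available.
novel_shots = rng.normal(loc=10.0, scale=1.5, size=(5, 64))
novel_mean = novel_shots.mean(axis=0)

# Generate additional features by sampling around the novel-class
# mean with the variance transferred from the base set.
n_aug = 50
augmented = rng.normal(loc=novel_mean, scale=np.sqrt(base_var),
                       size=(n_aug, 64))

# Original and augmented features together form the classifier's
# training set (e.g. for a logistic-regression head).
train_set = np.vstack([novel_shots, augmented])
print(train_set.shape)  # (55, 64)
```

In practice the augmented features would feed a lightweight classifier trained per episode; the sketch only checks that the augmented set has the expected shape.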