Few-shot classification aims to adapt classifiers to novel classes with only a few training samples. However, the insufficiency of training data may cause a biased estimation of the feature distribution of a class. To alleviate this problem, we present a simple yet effective feature rectification method that exploits the category correlation between novel and base classes as prior knowledge. We explicitly capture this correlation by mapping each feature to a latent vector whose dimension matches the number of base classes, treating it as the log-probability of the feature over the base classes. From this latent vector, the rectified feature is directly constructed by a decoder, which we expect to retain category-related information while removing other stochastic factors, so that the result lies closer to its class centroid. Furthermore, by changing the temperature value in the softmax, we can re-balance feature rectification against reconstruction for better performance. Our method is generic, flexible, and agnostic to the choice of feature extractor and classifier, and can readily be embedded into existing FSL approaches. Experiments verify that our method is capable of rectifying biased features, especially when a feature lies far from its class centroid. The proposed approach consistently obtains considerable performance gains on three widely used benchmarks, evaluated with different backbones and classifiers. The code will be made public.
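The rectification pipeline described above — encode a feature into base-class logits, apply a temperature-scaled softmax, then decode back into feature space — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the linear weights `W_enc` and `W_dec` stand in for the paper's (unspecified here) encoder and decoder modules, and all dimensions are placeholder values.

```python
import numpy as np

def softmax(z, temperature=1.0):
    # Lower temperature sharpens the base-class distribution;
    # the abstract notes this re-balances rectification vs. reconstruction.
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def rectify(feature, W_enc, W_dec, temperature=1.0):
    """Encode a feature to a latent vector over base classes, then decode.

    W_enc, W_dec are hypothetical linear stand-ins for the encoder/decoder.
    """
    logits = feature @ W_enc             # latent vector, dim = #base classes
    probs = softmax(logits, temperature) # treated as probabilities over base classes
    return probs @ W_dec                 # rectified feature from the decoder

# Toy usage with placeholder dimensions.
rng = np.random.default_rng(0)
d, n_base = 64, 100                      # feature dim, number of base classes
W_enc = rng.standard_normal((d, n_base))
W_dec = rng.standard_normal((n_base, d))
f = rng.standard_normal(d)               # a (possibly biased) novel-class feature
f_rect = rectify(f, W_enc, W_dec, temperature=0.5)
```

The rectified feature keeps the original dimensionality, so it can be passed to any downstream classifier unchanged, which is what makes the method agnostic to the backbone and classifier.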