Few-shot classification addresses the challenge of classifying examples given only limited labeled data. A powerful approach is to go beyond data augmentation, towards data synthesis. However, most data augmentation/synthesis methods for few-shot classification are overly complex and sophisticated, e.g. training a WGAN with multiple regularizers or training a network to transfer latent diversities from known to novel classes. We make two contributions, namely we show that: (1) a simple loss function is sufficient for training a feature generator in the few-shot setting; and (2) learning to generate tensor features instead of vector features is superior. Extensive experiments on the miniImageNet, CUB and CIFAR-FS datasets show that our method sets a new state of the art, outperforming more sophisticated few-shot data augmentation methods.