Conventional image classifiers are trained by randomly sampling mini-batches of images. To achieve state-of-the-art performance, sophisticated data augmentation schemes are used to expand the amount of training data available for sampling. In contrast, meta-learning algorithms sample not only images, but classes as well. We investigate how data augmentation can be used not only to expand the number of images available per class, but also to generate entirely new classes. We systematically dissect the meta-learning pipeline and investigate the distinct ways in which data augmentation can be integrated at both the image and class levels. Our proposed meta-specific data augmentation significantly improves the performance of meta-learners on few-shot classification benchmarks.