Conventional image classifiers are trained by randomly sampling mini-batches of images. To achieve state-of-the-art performance, practitioners use sophisticated data augmentation schemes to expand the amount of training data available for sampling. In contrast, meta-learning algorithms sample support data, query data, and tasks on each training step. In this complex sampling scenario, data augmentation can be used not only to expand the number of images available per class, but also to generate entirely new classes/tasks. We systematically dissect the meta-learning pipeline and investigate the distinct ways in which data augmentation can be integrated at both the image and class levels. Our proposed meta-specific data augmentation significantly improves the performance of meta-learners on few-shot classification benchmarks.
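The episodic sampling and the two augmentation levels described above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the function names, the horizontal flip as image-level augmentation, and the 90-degree rotation as class-level augmentation (a common trick in few-shot learning for manufacturing new classes) are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_level_augment(img, rng):
    # Image-level augmentation (illustrative: random horizontal flip)
    # expands the effective number of images available per class.
    if rng.random() < 0.5:
        return img[:, ::-1].copy()
    return img

def class_level_augment(images):
    # Class-level augmentation (illustrative: rotate every image of a
    # class by 90 degrees) yields an entirely new class before tasks
    # are sampled. The paper's actual augmentations may differ.
    return np.rot90(images, k=1, axes=(1, 2))

def sample_episode(dataset, n_way, k_shot, q_query, rng):
    """Sample one few-shot episode: n_way classes, with k_shot support
    and q_query query images each. dataset maps class id -> [N, H, W]."""
    class_ids = rng.choice(list(dataset), size=n_way, replace=False)
    support, query = [], []
    for label, cid in enumerate(class_ids):
        imgs = dataset[cid]
        idx = rng.choice(len(imgs), size=k_shot + q_query, replace=False)
        chosen = np.stack([image_level_augment(imgs[i], rng) for i in idx])
        support.append((label, chosen[:k_shot]))
        query.append((label, chosen[k_shot:]))
    return support, query

# Toy dataset: 5 classes of random 8x8 "images".
dataset = {c: rng.random((20, 8, 8)) for c in range(5)}

# Class-level augmentation doubles the class pool available for tasks.
augmented = dict(dataset)
for c in range(5):
    augmented[c + 5] = class_level_augment(dataset[c])

support, query = sample_episode(augmented, n_way=5, k_shot=1, q_query=3, rng=rng)
```

Each training step then draws a fresh episode from the augmented class pool, so augmentation acts at both the image level (more samples per class) and the task level (more classes to compose tasks from).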