One-shot fine-grained visual recognition often suffers from the problem of having few training examples for new fine-grained classes. To alleviate this problem, off-the-shelf image generation techniques based on Generative Adversarial Networks (GANs) can potentially create additional training images. However, these GAN-generated images are often not helpful for actually improving the accuracy of one-shot fine-grained recognition. In this paper, we propose a meta-learning framework to combine generated images with original images, so that the resulting "hybrid" training images improve one-shot learning. Specifically, the generic image generator is updated with a few training instances of novel classes, and a Meta Image Reinforcing Network (MetaIRNet) is proposed to conduct one-shot fine-grained recognition as well as image reinforcement. Our experiments demonstrate consistent improvement over baselines on one-shot fine-grained image classification benchmarks. Furthermore, our analysis shows that the reinforced images have greater diversity than the original and GAN-generated images.
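The abstract describes fusing each original image with a GAN-generated counterpart into a "hybrid" training image using learned combination weights. As a rough illustration of that idea (not the paper's exact architecture), the sketch below shows a per-cell convex combination of the two images; the function name `fuse_images`, the fixed 3x3 grid, and the random stand-in weights are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def fuse_images(original, generated, weights, grid=3):
    """Combine an original and a GAN-generated image into a hybrid image.

    original, generated: (C, H, W) tensors.
    weights: (grid, grid) tensor in [0, 1]; a value near 1 keeps the original
             content in that cell, a value near 0 keeps the generated content.
    Returns a (C, H, W) hybrid image.
    """
    _, h, w = original.shape
    # Upsample the per-cell weights to a full-resolution blending mask.
    mask = F.interpolate(weights.view(1, 1, grid, grid),
                         size=(h, w), mode="nearest").squeeze(0)
    # Convex combination of the two images, cell by cell.
    return mask * original + (1.0 - mask) * generated


# Usage example: fuse a (3, 84, 84) image pair. In the described framework the
# weights would be predicted by a learned network; here random values stand in.
orig = torch.rand(3, 84, 84)
gen = torch.rand(3, 84, 84)
w = torch.sigmoid(torch.randn(3, 3))
hybrid = fuse_images(orig, gen, w)
print(hybrid.shape)  # torch.Size([3, 84, 84])
```

The hybrid images produced this way would then be added to the support set of a one-shot classifier as extra training examples, which is the role the abstract assigns to image reinforcement.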