In recent years, many data augmentation techniques have been proposed to increase the diversity of input data and reduce the risk of overfitting in deep neural networks. In this work, we propose an easy-to-implement, model-free data augmentation method called Local Magnification (LOMA). Unlike other geometric data augmentation methods, which perform global transformations on images, LOMA generates additional training data by randomly magnifying a local area of the image. This local magnification produces geometric changes that significantly broaden the range of augmentations while keeping objects recognizable. Moreover, we extend the idea of LOMA and random cropping to the feature space to augment the feature map, which further boosts classification accuracy considerably. Experiments show that our proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve performance on image classification and object detection. Further combined with our feature augmentation techniques, termed LOMA_IF&FO, it continues to strengthen the model and outperforms advanced intensity-transformation methods for data augmentation.
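To make the operation concrete, below is a minimal NumPy sketch of the kind of local magnification the abstract describes. Everything here is illustrative: the function name `local_magnification`, the radial warp with a smooth falloff toward the rim, and all parameter ranges are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_magnification(image, max_strength=2.0, rng=None):
    """Randomly magnify a local circular region of `image` (H, W, C) uint8.

    A minimal sketch of LOMA-style augmentation: pixels inside a randomly
    placed circle are sampled from positions pulled toward the circle's
    center, which magnifies the local content while leaving the rest of
    the image untouched. Parameter ranges are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]

    # Random center, radius, and magnification factor for the local region.
    cy, cx = rng.uniform(0, h), rng.uniform(0, w)
    radius = rng.uniform(0.15, 0.4) * min(h, w)
    strength = rng.uniform(1.2, max_strength)

    # Coordinate grid and offsets from the chosen center.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dy, dx = ys - cy, xs - cx
    dist = np.sqrt(dy * dy + dx * dx)

    # Inside the circle, sample from a point closer to the center.
    # The shrink factor eases from 1/strength at the center to 1 at the
    # rim, so the warp blends smoothly into the untouched surroundings;
    # outside the circle the mapping is the identity.
    t = np.clip(dist / radius, 0.0, 1.0)
    shrink = 1.0 / strength + (1.0 - 1.0 / strength) * t
    src_y = np.clip(cy + dy * shrink, 0, h - 1)
    src_x = np.clip(cx + dx * shrink, 0, w - 1)

    # Nearest-neighbour resampling keeps the sketch dependency-free.
    return image[src_y.round().astype(int), src_x.round().astype(int)]
```

Because the warp is the identity at and beyond the circle's rim, the augmented image has no visible seam, which is one plausible way to preserve object recognizability while still producing a geometric change.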