Image augmentations applied during training are crucial for the generalization performance of image classifiers. Consequently, a large body of research has focused on finding the optimal augmentation policy for a given task. Yet RandAugment [2], a simple random augmentation policy, has recently been shown to outperform existing sophisticated policies. Only Adversarial AutoAugment (AdvAA) [11], an approach based on the idea of adversarial training, has been shown to be better than RandAugment. In this paper, we show that random augmentations are still competitive compared to an optimal adversarial approach, as well as to simple curricula, and we conjecture that the success of AdvAA is due to the stochasticity of the policy controller network, which introduces a mild form of curriculum.
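For concreteness, a RandAugment-style policy simply samples N operations uniformly at random and applies each at a shared global magnitude M. The sketch below illustrates this idea in Python; the operation list and magnitude mapping are illustrative assumptions, not the exact set used in [2].

```python
# Minimal sketch of a RandAugment-style random augmentation policy.
# Assumptions: PIL-based transforms; the op list and the magnitude-to-strength
# mapping are illustrative, not the exact configuration from RandAugment [2].
import random
from PIL import Image, ImageEnhance, ImageOps

def rand_augment(img: Image.Image, n: int = 2, m: int = 9) -> Image.Image:
    """Apply n randomly chosen ops, each at global magnitude m in [0, 30]."""
    strength = m / 30.0
    ops = [
        lambda im: ImageOps.autocontrast(im),
        lambda im: ImageOps.solarize(im, int(256 * (1 - strength))),
        lambda im: ImageEnhance.Contrast(im).enhance(1 + strength),
        lambda im: ImageEnhance.Brightness(im).enhance(1 + strength),
        lambda im: im.rotate(30 * strength),
    ]
    for op in random.choices(ops, k=n):  # sample ops uniformly at random
        img = op(img)
    return img
```

Unlike learned policies, this procedure has only two hyperparameters (n and m), which is precisely what makes its competitiveness with sophisticated search-based approaches notable.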