Existing automatic data augmentation (DA) methods either ignore the target model's state when updating DA's parameters during training or adopt update strategies that are not effective enough. In this work, we design a novel data augmentation strategy called "Universal Adaptive Data Augmentation" (UADA). Unlike existing methods, UADA adaptively updates DA's parameters according to the target model's gradient information during training: given a pre-defined set of DA operations, we randomly decide the types and magnitudes of DA operations for every data batch during training, and adaptively update DA's parameters along the gradient direction of the loss with respect to DA's parameters. In this way, UADA increases the training loss of the target network, forcing it to learn features from harder samples and thereby improving generalization. Moreover, UADA is very general and can be applied to numerous tasks, e.g., image classification, semantic segmentation, and object detection. Extensive experiments with various models on CIFAR-10, CIFAR-100, ImageNet, tiny-ImageNet, Cityscapes, and VOC07+12 demonstrate the significant performance improvements brought by the proposed adaptive augmentation.
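The core idea of updating a DA parameter along the gradient of the loss with respect to that parameter can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the toy linear model, the single `da_magnitude` parameter, and the finite-difference gradient estimate (used here because many DA operations lack closed-form derivatives) are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, magnitude):
    """Toy DA operation: scale inputs by (1 + magnitude)."""
    return x * (1.0 + magnitude)

def loss_fn(w, x, y):
    """Squared-error loss of a toy linear model."""
    pred = x @ w
    return float(np.mean((pred - y) ** 2))

# Toy batch and model weights (stand-ins for the target network).
x = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w = rng.normal(size=4)

da_magnitude = 0.1   # the DA parameter to be adapted
eps, lr = 1e-3, 0.5  # finite-difference step and DA learning rate

for step in range(5):
    # Estimate dL/d(magnitude) by central finite differences.
    l_plus = loss_fn(w, augment(x, da_magnitude + eps), y)
    l_minus = loss_fn(w, augment(x, da_magnitude - eps), y)
    grad = (l_plus - l_minus) / (2 * eps)
    # Ascend the gradient so the augmented batch becomes harder
    # (higher training loss), as the adaptive update prescribes.
    da_magnitude += lr * grad
```

In a full training loop, this DA update would alternate with ordinary descent steps on the model weights, so the model keeps learning from progressively harder augmented batches.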