Data augmentation has been an indispensable tool for improving the performance of deep neural networks; however, augmentation policies rarely transfer across tasks and datasets. Consequently, a recent trend is to adopt AutoML techniques to learn suitable augmentation policies without extensive hand-crafted tuning. In this paper, we propose an efficient differentiable search algorithm called Direct Differentiable Augmentation Search (DDAS). It exploits meta-learning with a one-step gradient update and a continuous relaxation of the expected training loss for efficient search. Our DDAS achieves efficient augmentation search without relying on approximations such as Gumbel-Softmax or second-order gradient approximation. To further reduce the adverse effect of improper augmentations, we organize the search space into a two-level hierarchy, in which we first decide whether to apply augmentation, and then determine the specific augmentation policy. On standard image classification benchmarks, our DDAS achieves a state-of-the-art accuracy-efficiency tradeoff while reducing the search cost dramatically, e.g., 0.15 GPU hours for CIFAR-10. In addition, we use DDAS to search augmentation policies for object detection and achieve performance comparable to AutoAugment while being 1000x faster.
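To make the relaxation concrete, below is a minimal PyTorch-style sketch of the idea described above. All names (`relaxed_loss`, `search_step`, `alpha`, `beta`, `augmentations`) are illustrative and not the paper's code. The policy update shown is a simplified first-order alternation on a held-out batch; the paper instead differentiates the validation loss through the one-step weight update, which remains first-order because the relaxed loss is linear in the branch probabilities.

```python
import torch
import torch.nn.functional as F

def relaxed_loss(model, x, y, augmentations, alpha, beta):
    """Expected loss under the two-level augmentation hierarchy.

    Level 1: sigmoid(beta) is the probability of applying any augmentation.
    Level 2: softmax(alpha) distributes that probability over candidates.
    The expectation is a smooth function of (alpha, beta), so both can be
    optimized by plain gradient descent -- no Gumbel-Softmax sampling.
    """
    p_apply = torch.sigmoid(beta)
    p_ops = p_apply * F.softmax(alpha, dim=0)

    clean = F.cross_entropy(model(x), y)
    augmented = torch.stack(
        [F.cross_entropy(model(aug(x)), y) for aug in augmentations]
    )
    return (1.0 - p_apply) * clean + (p_ops * augmented).sum()

def search_step(model, weight_opt, policy_opt, train_batch, val_batch,
                augmentations, alpha, beta):
    # 1) One-step gradient update of the network weights on the relaxed
    #    training loss, with the policy parameters held fixed.
    x, y = train_batch
    weight_opt.zero_grad()
    relaxed_loss(model, x, y, augmentations,
                 alpha.detach(), beta.detach()).backward()
    weight_opt.step()

    # 2) Update the policy parameters on a held-out batch (simplified
    #    first-order step in place of the paper's meta-gradient).
    xv, yv = val_batch
    policy_opt.zero_grad()
    relaxed_loss(model, xv, yv, augmentations, alpha, beta).backward()
    policy_opt.step()

# Hypothetical setup: two candidate augmentations searched during training.
augs = [lambda x: torch.flip(x, dims=[-1]),         # horizontal flip
        lambda x: x + 0.05 * torch.randn_like(x)]   # additive noise
alpha = torch.zeros(len(augs), requires_grad=True)  # per-op logits
beta = torch.zeros((), requires_grad=True)          # apply/skip logit
```

After the search converges, `sigmoid(beta)` and `softmax(alpha)` give the learned probability of applying augmentation and of choosing each candidate operation, which can then be fixed for full training.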