Data augmentation is an effective technique for improving the generalization of deep neural networks. Recently, AutoAugment proposed a well-designed search space and a search algorithm that automatically finds augmentation policies in a data-driven manner. However, AutoAugment is computationally intensive. In this paper, we propose an efficient gradient-based search algorithm, called Hypernetwork-Based Augmentation (HBA), which simultaneously learns model parameters and augmentation hyperparameters in a single training run. HBA uses a hypernetwork to approximate a population-based training algorithm, which enables us to tune augmentation hyperparameters by gradient descent. In addition, we introduce a weight-sharing strategy that simplifies our hypernetwork architecture and speeds up our search algorithm. We conduct experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet. Our results show that HBA is competitive with state-of-the-art methods in terms of both search speed and accuracy.
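To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea: a hypernetwork maps augmentation hyperparameters to model weights, so the validation loss becomes differentiable with respect to those hyperparameters. The layer sizes, toy data, noise-based augmentation, and alternating update loop are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# A minimal sketch of hypernetwork-based hyperparameter tuning.
# Everything here (HyperLinear, augment, the toy data) is a hypothetical
# stand-in chosen for brevity, not the authors' HBA code.
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """Linear layer whose weights are generated from the augmentation
    hyperparameters `lam` by a small hypernetwork (one linear map here)."""
    def __init__(self, in_dim, out_dim, n_hparams):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Hypernetwork: maps lam -> flattened weight matrix and bias.
        self.hyper = nn.Linear(n_hparams, out_dim * in_dim + out_dim)

    def forward(self, x, lam):
        params = self.hyper(lam)
        W = params[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = params[self.out_dim * self.in_dim :]
        return x @ W.t() + b

torch.manual_seed(0)
n_hparams = 2                                             # e.g. two magnitudes
lam = torch.full((n_hparams,), 0.5, requires_grad=True)   # augmentation hparams
model = HyperLinear(8, 1, n_hparams)

opt_w = torch.optim.SGD(model.parameters(), lr=1e-2)      # hypernetwork weights
opt_h = torch.optim.SGD([lam], lr=1e-2)                   # hyperparameters

x_tr, y_tr = torch.randn(32, 8), torch.randn(32, 1)       # toy training split
x_va, y_va = torch.randn(32, 8), torch.randn(32, 1)       # toy validation split

def augment(x, lam):
    # Placeholder "augmentation": additive noise scaled by lam[0].
    return x + lam[0] * torch.randn_like(x)

for step in range(100):
    # (1) Update the hypernetwork on augmented training data, with lam fixed.
    opt_w.zero_grad()
    loss_tr = nn.functional.mse_loss(
        model(augment(x_tr, lam.detach()), lam.detach()), y_tr)
    loss_tr.backward()
    opt_w.step()

    # (2) Update lam by gradient descent on the clean validation loss;
    #     the gradient flows through the hypernetwork's response to lam.
    opt_h.zero_grad()
    loss_va = nn.functional.mse_loss(model(x_va, lam), y_va)
    loss_va.backward()
    opt_h.step()
    lam.data.clamp_(0.0, 1.0)  # keep magnitudes in a valid range
```

The key design point this sketch illustrates is that, because the hypernetwork makes the trained weights an explicit function of the augmentation hyperparameters, both can be updated by ordinary gradient descent in a single training run, rather than by retraining the model for each candidate policy.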