Neural Architecture Search (NAS) has attracted growing interest. To reduce the search cost, recent work has explored weight sharing across models and made major progress in One-Shot NAS. However, it has been observed that a model with higher one-shot accuracy does not necessarily perform better when trained stand-alone. To address this issue, in this paper we propose Progressive Automatic Design of search space, named PAD-NAS. Unlike previous approaches where the same operation search space is shared by all layers in the supernet, we formulate a progressive search strategy based on operation pruning and build a layer-wise operation search space. In this way, PAD-NAS can automatically design the operations for each layer and achieve a trade-off between search space quality and model diversity. During the search, we also take hardware platform constraints into account to enable efficient neural network deployment. Extensive experiments on ImageNet show that our method achieves state-of-the-art performance.
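To make the layer-wise, progressive search-space design concrete, the following is a minimal sketch of how per-layer candidate operations could be progressively pruned over several stages. The operation names, the scoring function, and the pruning ratios are all illustrative assumptions, not the exact PAD-NAS procedure; in practice the score would come from one-shot supernet evaluation rather than the random stand-in used here.

```python
import random

# Hypothetical illustration of progressive, layer-wise operation pruning.
# CANDIDATE_OPS, NUM_LAYERS, estimate_op_score, and the keep_ratio schedule
# are assumptions for illustration only.

CANDIDATE_OPS = ["mbconv_3x3", "mbconv_5x5", "mbconv_7x7", "identity"]
NUM_LAYERS = 4


def estimate_op_score(layer, op):
    """Stand-in for scoring an operation at a given layer
    (e.g., via one-shot supernet accuracy); random here for illustration."""
    random.seed(hash((layer, op)) % (2 ** 32))
    return random.random()


def progressive_prune(num_stages=2, keep_ratio=0.5):
    """Progressively shrink each layer's operation search space."""
    # Start with the full shared operation set at every layer.
    search_space = {layer: list(CANDIDATE_OPS) for layer in range(NUM_LAYERS)}
    for _ in range(num_stages):
        for layer, ops in search_space.items():
            # Rank operations at this layer and keep only the top fraction,
            # yielding a different (smaller) operation set per layer.
            ranked = sorted(ops, key=lambda op: estimate_op_score(layer, op),
                            reverse=True)
            keep = max(1, int(len(ranked) * keep_ratio))
            search_space[layer] = ranked[:keep]
    return search_space


if __name__ == "__main__":
    for layer, ops in progressive_prune().items():
        print(f"layer {layer}: {ops}")
```

The key design point this sketch tries to convey is that pruning decisions are made independently per layer, so each layer ends up with its own reduced operation set rather than all layers sharing one global search space.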