Neural Architecture Search (NAS) has demonstrated state-of-the-art performance on various computer vision tasks. Despite this superior performance, existing methods suffer from high computational complexity and limited generality. In this paper, we propose an efficient and unified NAS framework, termed DDPNAS, based on dynamic distribution pruning, which admits a theoretical bound on accuracy and efficiency. Specifically, we first sample architectures from a joint categorical distribution; the search space is then dynamically pruned and its distribution updated every few epochs. With the proposed efficient network generation method, we directly obtain optimal neural architectures under given constraints, which is practical for on-device models across diverse search spaces and constraints. The architectures found by our method achieve remarkable top-1 accuracies of 97.56% on CIFAR-10 and 77.2% on ImageNet (mobile setting), with the fastest search process, i.e., only 1.8 GPU hours on a Tesla V100. Code for searching and network generation is available at: https://openi.pcl.ac.cn/PCL AutoML/XNAS.
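The sample-update-prune loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the operation names, the toy scoring function, and all hyperparameters (update rate, pruning schedule) are assumptions chosen for clarity.

```python
import random

# Illustrative candidate operations per layer; real DDPNAS search
# spaces are far larger (this is an assumption for the sketch).
OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]
NUM_LAYERS = 3

def toy_score(arch):
    # Stand-in for validation accuracy; a real run would train and
    # evaluate the sampled architecture. Here conv3x3 is preferred.
    return sum(1.0 if op == "conv3x3" else random.random() * 0.5
               for op in arch)

def sample(dist):
    # Draw one op per layer from each layer's categorical distribution.
    return [random.choices(list(d.keys()), weights=list(d.values()))[0]
            for d in dist]

def search(epochs=12, samples_per_epoch=8, prune_every=4, seed=0):
    random.seed(seed)
    # Joint categorical distribution: one uniform categorical per layer.
    dist = [{op: 1.0 / len(OPS) for op in OPS} for _ in range(NUM_LAYERS)]
    for epoch in range(1, epochs + 1):
        # Sample architectures and reward the ops they used.
        for _ in range(samples_per_epoch):
            arch = sample(dist)
            score = toy_score(arch)
            for layer, op in enumerate(arch):
                dist[layer][op] += 0.1 * score
        # Renormalize each layer's distribution.
        for d in dist:
            total = sum(d.values())
            for op in d:
                d[op] /= total
        # Every few epochs, dynamically prune the weakest candidate.
        if epoch % prune_every == 0:
            for d in dist:
                if len(d) > 1:
                    del d[min(d, key=d.get)]
                    total = sum(d.values())
                    for op in d:
                        d[op] /= total
    # Final architecture: the highest-probability op per layer.
    return [max(d, key=d.get) for d in dist]

best = search()
print(best)
```

Under this toy objective the distribution concentrates on the rewarded operation as pruning proceeds; the point is only to show how sampling, distribution updates, and periodic pruning interact, not to reproduce the paper's results.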