Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility, and cost of NAS schemes remain hard to assess. In this paper, we propose Generative Adversarial NAS (GA-NAS), with theoretically provable convergence guarantees, to promote stability and reproducibility in neural architecture search. Inspired by importance sampling, GA-NAS iteratively fits a generator to previously discovered top architectures, thus increasingly focusing on important parts of a large search space. Furthermore, we propose an efficient adversarial learning approach, in which the generator is trained by reinforcement learning based on rewards provided by a discriminator, and can therefore explore the search space without evaluating a large number of architectures. Extensive experiments show that GA-NAS beats the best published results in several cases on three public NAS benchmarks. Moreover, GA-NAS can handle ad-hoc search constraints and search spaces. We show that GA-NAS can be used to improve already optimized baselines found by other NAS methods, including EfficientNet and ProxylessNAS, in their original search spaces, in terms of ImageNet accuracy or the number of parameters.
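To make the adversarial search loop concrete, below is a minimal, self-contained sketch of the kind of procedure the abstract describes: a discriminator learns to separate previously discovered top architectures from generated ones, a generator policy is updated by REINFORCE using the discriminator's score as the reward (so exploration does not require evaluating many architectures), and the top set is periodically refreshed with newly evaluated candidates. The toy bit-string search space, the synthetic evaluate proxy, and all class and variable names are illustrative assumptions, not the paper's implementation.

```python
# Conceptual sketch of a GA-NAS-style loop; NOT the authors' implementation.
# Architectures are toy 16-bit strings and the evaluator is a synthetic proxy.
import torch
import torch.nn as nn

ARCH_LEN = 16  # toy search space: binary strings of length 16

def evaluate(arch: torch.Tensor) -> float:
    """Synthetic stand-in for actually training/evaluating an architecture."""
    return arch.float().mean().item()  # toy "accuracy": fraction of 1-bits

class Generator(nn.Module):
    """Toy policy: independent Bernoulli logits, one per architecture bit."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(ARCH_LEN))

    def sample(self, n: int):
        probs = torch.sigmoid(self.logits).expand(n, -1)
        archs = torch.bernoulli(probs)
        log_prob = (archs * probs.log() + (1 - archs) * (1 - probs).log()).sum(1)
        return archs, log_prob

class Discriminator(nn.Module):
    """Scores how much an architecture resembles the current top set."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ARCH_LEN, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=0.05)
d_opt = torch.optim.Adam(disc.parameters(), lr=0.01)
bce = nn.BCEWithLogitsLoss()

# Seed the "top architectures" set with a few randomly evaluated candidates.
top_k = sorted([torch.bernoulli(torch.full((ARCH_LEN,), 0.5)) for _ in range(8)],
               key=evaluate, reverse=True)[:4]

for step in range(200):
    fake, log_prob = gen.sample(16)
    real = torch.stack(top_k)

    # Discriminator: separate current top architectures from generated ones.
    d_loss = (bce(disc(real), torch.ones(len(real))) +
              bce(disc(fake.detach()), torch.zeros(len(fake))))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: REINFORCE with the discriminator's score as the reward,
    # so this step needs no true architecture evaluations.
    with torch.no_grad():
        reward = torch.sigmoid(disc(fake))
    g_loss = -(log_prob * (reward - reward.mean())).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Occasionally evaluate a few generated architectures and refresh the top
    # set, focusing the generator on increasingly promising regions.
    if step % 50 == 49:
        candidates = top_k + [a for a in fake[:4]]
        top_k = sorted(candidates, key=evaluate, reverse=True)[:4]

print("best toy architecture:", max(top_k, key=evaluate))
```

In this sketch the generator is a trivially simple factorized policy; the paper's generator would instead produce structured architectures (e.g., cells or graphs), but the alternation between discriminator fitting, reward-driven generator updates, and refreshing the top set follows the same pattern described above.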