Neural architecture search (NAS) has recently gained considerable popularity in the deep learning community, largely because it offers users without extensive expertise an opportunity to benefit from the success of deep neural networks (DNNs). However, NAS remains laborious and time-consuming, since its search process requires a large number of performance estimations and training DNNs is computationally intensive. Improving the efficiency of NAS is therefore essential to overcoming this major limitation. This paper begins with a brief introduction to the general framework of NAS. Methods for evaluating candidate networks under proxy metrics are then systematically discussed, followed by a description of surrogate-assisted NAS, which is divided into three categories: Bayesian optimization for NAS, surrogate-assisted evolutionary algorithms for NAS, and multi-objective optimization (MOP) for NAS. Finally, remaining challenges and open research questions are discussed, and promising research topics in this emerging field are suggested.