Neural architecture search (NAS) has recently become increasingly popular in the deep learning community, mainly because it allows users without deep domain expertise to benefit from the success of deep neural networks (DNNs). However, NAS remains laborious and time-consuming: a large number of performance estimations are required during the search process, and training DNNs is computationally intensive. Improving computational efficiency is therefore essential to overcoming this major limitation of NAS. A systematic overview of computationally efficient NAS (CE-NAS) methods, however, is still lacking. To fill this gap, we provide a comprehensive survey of the state of the art in CE-NAS, categorizing existing work into proxy-based and surrogate-assisted NAS methods, together with a thorough discussion of their design principles and a quantitative comparison of their performance and computational complexity. We also discuss the remaining challenges and open research questions, and suggest promising research topics in this emerging field.