Neural Architecture Search (NAS) is a collection of methods for crafting the way neural networks are built. Current NAS methods are far from ab initio and automatic, as they rely on manually designed backbone architectures or micro building blocks (cells), which have yielded only minor performance gains over random baselines. They also involve significant manual expert effort across various components of the NAS pipeline. This raises a natural question: are current NAS methods still as heavily dependent on manual effort in search-space design and wiring as model building was before the advent of NAS? In this paper, instead of merely chasing slight improvements over state-of-the-art (SOTA) performance, we revisit the fundamental approach to NAS and propose a novel method called ReNAS that searches for the complete neural network with little human effort and is a step closer to AutoML nirvana. Our method starts from a complete graph mapped to a neural network and searches for the connections and operations by balancing exploration and exploitation of the search space. The results are on par with the SOTA performance of methods that leverage handcrafted blocks. We believe this approach may lead to newer NAS strategies for a variety of network types.
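As a rough illustration of the search-space encoding described above, the sketch below (a hypothetical example under our own assumptions, not the paper's implementation) represents a network as a complete directed acyclic graph over ordered nodes, where the search decides, for each forward edge, whether the connection is kept and which candidate operation it carries; the operation names and the sampling routine are illustrative only.

```python
# Hypothetical sketch of a complete-graph search space for NAS.
# Not the ReNAS implementation; names and candidate operations are assumptions.
import random
from itertools import combinations

CANDIDATE_OPS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]  # illustrative op set

def sample_architecture(num_nodes=6, edge_prob=0.5, rng=random):
    """Sample one architecture from a complete-graph search space.

    Every forward edge (src, dst) with src < dst is a candidate connection;
    a kept edge is assigned one of the candidate operations.
    """
    arch = {}
    for src, dst in combinations(range(num_nodes), 2):   # all forward edges of the complete DAG
        if rng.random() < edge_prob:                      # decide whether the connection exists
            arch[(src, dst)] = rng.choice(CANDIDATE_OPS)  # decide the operation on that edge
    return arch

if __name__ == "__main__":
    # Each sampled dict maps an edge to its operation, e.g. {(0, 2): "conv3x3", ...}
    print(sample_architecture())
```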