Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction. In this paper, we consider automating search space design to minimize human interference, which however faces two challenges: the explosive complexity of the exploration space and the expensive computational cost of evaluating the quality of different search spaces. To address them, we propose a novel differentiable evolutionary framework named AutoSpace, which evolves the search space to an optimal one with the following novel techniques: a differentiable fitness scoring function to efficiently evaluate the performance of cells, and a reference architecture to speed up the evolution procedure and avoid falling into sub-optimal solutions. The framework is generic and compatible with additional computational constraints, making it feasible to learn specialized search spaces that fit different computational budgets. With the learned search space, the performance of recent NAS algorithms can be significantly improved compared with using previous manually designed search spaces. Remarkably, the models generated from the new search space achieve 77.8% top-1 accuracy on ImageNet under the mobile setting (MAdds < 500M), outperforming the previous SOTA EfficientNet-B0 by 0.7%. All code will be made public.
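To make the described procedure more concrete, the following is a minimal, hypothetical Python sketch of an evolutionary loop over candidate search spaces. All names here (`OPS`, `random_cell`, `fitness_score`, `mutate`, `reference_arch`) are illustrative assumptions rather than the paper's actual implementation; in particular, the real fitness scoring function is differentiable and learned so that cells can be evaluated cheaply, whereas the stub below returns a random score purely for structure.

```python
import random

# Candidate operations a cell may contain (illustrative, not the paper's set).
OPS = ["conv3x3", "conv5x5", "mbconv3", "mbconv6", "skip"]


def random_cell(num_ops=3):
    """Sample a candidate cell: a small set of operations for the search space."""
    return tuple(random.sample(OPS, num_ops))


def fitness_score(cell, reference_arch):
    """Stub for the differentiable fitness scoring function.

    In the paper this is learned, so cell quality can be estimated without
    fully training each candidate; here a random number stands in for it.
    """
    return random.random()


def mutate(cell):
    """Swap one operation in a cell for a random alternative not already used."""
    ops = list(cell)
    i = random.randrange(len(ops))
    ops[i] = random.choice([o for o in OPS if o not in ops])
    return tuple(ops)


def evolve_search_space(pop_size=20, generations=10, reference_arch="mbv2-like"):
    """Evolve a population of cells; the survivors form the learned search space.

    The reference architecture anchors the fitness evaluation, which is meant
    to speed up evolution and keep it away from degenerate solutions.
    """
    population = [random_cell() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: fitness_score(c, reference_arch),
                        reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population by mutating the top-scoring cells.
        population = survivors + [mutate(c) for c in survivors]
    population.sort(key=lambda c: fitness_score(c, reference_arch), reverse=True)
    return population[: pop_size // 2]


if __name__ == "__main__":
    print(evolve_search_space())
```

A computational-budget constraint (e.g. MAdds < 500M) could be incorporated in this sketch by penalizing or filtering out cells whose cost exceeds the budget during scoring, which is one way to obtain the specialized, budget-specific search spaces the abstract mentions.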