The automation of neural architecture design has long been coveted as an alternative to human experts. Recent works rely on small search spaces, which are easier to optimize but place a limited upper bound on the quality of the optimal solution. Extra human design effort is needed for these methods to propose a search space better suited to the specific task and algorithm capacity. To further raise the degree of automation in neural architecture search, we present a Neural Search-space Evolution (NSE) scheme that iteratively amplifies the results of previous efforts by maintaining an optimized subset of the search space. This design minimizes the need for a well-designed search space. We further extend the flexibility of obtainable architectures by introducing a learnable multi-branch setting. With the proposed method, a consistent performance gain is achieved during a progressive search over successive search spaces. We achieve 77.3% top-1 retrain accuracy on ImageNet with 333M FLOPs, a state-of-the-art result among previous auto-generated architectures that do not involve knowledge distillation or weight pruning. When a latency constraint is adopted, our result also surpasses the previous best-performing mobile models, with a 77.9% top-1 retrain accuracy.