Neural architecture search (NAS) has brought significant progress to recent image recognition tasks. Most existing NAS methods adopt restricted search spaces, which limits the upper-bound performance of the searched models. To address this issue, we propose a new search space named MobileNet3-MT. By reducing human prior knowledge in all dimensions of the network, MobileNet3-MT accommodates more potential candidates. To search in this challenging space, we present an efficient Multi-trial Evolution-based NAS method termed MENAS. Specifically, we accelerate the evolutionary search by gradually pruning the models in the population: each model is trained with an early stop and then replaced by its Lottery Ticket (the explored optimal pruned network). In this way, the full training pipeline of cumbersome networks is avoided and more efficient networks are generated automatically. Extensive experimental results on ImageNet-1K, CIFAR-10, and CIFAR-100 demonstrate that MENAS achieves state-of-the-art performance.
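The following is a minimal sketch of the evolutionary loop outlined above, not the authors' implementation. Architectures are abstracted as lists of layer widths, and `sample_architecture`, `train_with_early_stop`, `find_lottery_ticket`, `evaluate`, and `mutate` are hypothetical stand-ins for the paper's actual training, pruning, and fitness procedures.

```python
# Illustrative sketch only: all helpers below are simplified placeholders,
# not the MENAS implementation.
import random

def sample_architecture(num_layers=10):
    # Placeholder: an architecture is just a list of channel widths here.
    return [random.choice([16, 24, 32, 48, 64]) for _ in range(num_layers)]

def train_with_early_stop(arch):
    # Placeholder for partial training with an early stop.
    return arch

def find_lottery_ticket(model):
    # Placeholder for extracting the optimal pruned sub-network ("Lottery Ticket"):
    # here every width is simply shrunk by 25% to mimic pruning.
    return [max(8, int(w * 0.75)) for w in model]

def evaluate(model):
    # Placeholder fitness: a crude accuracy proxy penalized by model size.
    return sum(model) / len(model) - 0.01 * sum(model)

def mutate(arch):
    # Randomly perturb one layer width to produce an offspring architecture.
    child = list(arch)
    child[random.randrange(len(child))] = random.choice([16, 24, 32, 48, 64])
    return child

def menas_style_search(population_size=20, generations=10):
    population = [sample_architecture() for _ in range(population_size)]
    for _ in range(generations):
        # Train each candidate briefly, then replace it with its pruned Lottery Ticket,
        # so cumbersome networks never go through the full training pipeline.
        tickets = [find_lottery_ticket(train_with_early_stop(a)) for a in population]
        tickets.sort(key=evaluate, reverse=True)
        survivors = tickets[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = menas_style_search()
    print("Best architecture (layer widths):", best)
```

The key design point mirrored here is that pruning happens inside the search loop, so each generation evolves over already-slimmed candidates rather than full-sized networks.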