To achieve excellent performance with modern neural networks, having the right network architecture is important. Neural Architecture Search (NAS) concerns the automatic discovery of task-specific network architectures. Modern NAS approaches leverage supernetworks whose subnetworks encode candidate neural network architectures. These subnetworks can be trained simultaneously, removing the need to train each network from scratch, thereby increasing the efficiency of NAS. A recent method called Neural Architecture Transfer (NAT) further improves the efficiency of NAS for computer vision tasks by using a multi-objective evolutionary algorithm to find high-quality subnetworks of a supernetwork pretrained on ImageNet. Building upon NAT, we introduce ENCAS - Evolutionary Neural Cascade Search. ENCAS can be used to search over multiple pretrained supernetworks to achieve a trade-off front of cascades of different neural network architectures, maximizing accuracy while minimizing FLOPs count. We test ENCAS on common computer vision benchmarks (CIFAR-10, CIFAR-100, ImageNet) and achieve Pareto dominance over previous state-of-the-art NAS models in computation regimes up to 1.5 GFLOPs. Additionally, applying ENCAS to a pool of 518 publicly available ImageNet classifiers leads to Pareto dominance in all computation regimes and increases the maximum accuracy from 88.6% to 89.0%, accompanied by an 18% decrease in computation effort from 362 to 296 GFLOPs. Our code is available at https://github.com/AwesomeLemon/ENCAS
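The cascade idea can be illustrated with a small, hedged sketch: given a pool of classifiers of increasing size, a cascade runs larger models only on samples the smaller ones are not confident about, and only Pareto-optimal cascades (those for which no other cascade is both cheaper and more accurate) are kept. The Python sketch below is a conceptual illustration, not the ENCAS implementation; all model names, accuracies, FLOPs counts, and exit rates are hypothetical placeholders, and the actual search uses a multi-objective evolutionary algorithm over pretrained supernetworks rather than exhaustive enumeration.

```python
# Conceptual sketch of cascade cost/accuracy trade-offs and Pareto filtering.
# NOT the ENCAS implementation; all numbers below are hypothetical placeholders.
from itertools import permutations

# Hypothetical pool of pretrained classifiers: (name, top-1 accuracy, GFLOPs).
models = [
    ("small", 0.76, 0.4),
    ("medium", 0.80, 1.0),
    ("large", 0.83, 2.5),
]

def cascade_cost_and_accuracy(stages, exit_rate=0.6):
    """Expected GFLOPs and a crude accuracy proxy for a cascade.

    A sample reaches stage i only if no earlier stage was confident enough;
    `exit_rate` is the (hypothetical) fraction of samples that exit per stage.
    """
    expected_flops, reach_prob = 0.0, 1.0
    for _, _, flops in stages:
        expected_flops += reach_prob * flops
        reach_prob *= (1.0 - exit_rate)
    # Proxy: accuracy of the most accurate member (a real search evaluates on data).
    accuracy = max(acc for _, acc, _ in stages)
    return expected_flops, accuracy

# Enumerate small cascades (ENCAS instead searches this space evolutionarily).
candidates = []
for size in (1, 2, 3):
    for stages in permutations(models, size):
        flops, acc = cascade_cost_and_accuracy(stages)
        candidates.append((flops, acc, [name for name, _, _ in stages]))

# Keep only Pareto-optimal cascades: no other cascade dominates them
# (cheaper with at least equal accuracy, or more accurate with at most equal cost).
pareto = [c for c in candidates
          if not any((o[0] < c[0] and o[1] >= c[1]) or (o[0] <= c[0] and o[1] > c[1])
                     for o in candidates)]

for flops, acc, names in sorted(pareto):
    print(f"{names}: {acc:.2%} top-1 (proxy), {flops:.2f} expected GFLOPs")
```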