Despite the remarkable successes of Convolutional Neural Networks (CNNs) in computer vision, manually designing a CNN is time-consuming and error-prone. Among the various Neural Architecture Search (NAS) methods that aim to automate the design of high-performance CNNs, differentiable NAS and population-based NAS are attracting increasing interest due to their distinct characteristics. To benefit from the merits of both while overcoming their deficiencies, this work proposes a novel NAS method, RelativeNAS. As the key to efficient search, RelativeNAS performs joint learning between fast-learners (i.e., networks with relatively higher accuracy) and slow-learners in a pairwise manner. Moreover, since RelativeNAS only requires low-fidelity performance estimation to distinguish the fast-learner and slow-learner within each pair, it saves computation cost in training the candidate architectures. The proposed RelativeNAS brings several unique advantages: (1) it achieves state-of-the-art performance on ImageNet with a top-1 error rate of 24.88%, outperforming DARTS and AmoebaNet-B by 1.82% and 1.12%, respectively; (2) it takes only nine hours on a single 1080Ti GPU to obtain the discovered cells, i.e., 3.75x and 7875x faster than DARTS and AmoebaNet, respectively; (3) it shows that the cells discovered on CIFAR-10 can be directly transferred to object detection, semantic segmentation, and keypoint detection, yielding competitive results of 73.1% mAP on PASCAL VOC, 78.7% mIoU on Cityscapes, and 68.5% AP on MSCOCO, respectively. The implementation of RelativeNAS is available at https://github.com/EMI-Group/RelativeNAS
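The pairwise joint-learning idea described above can be illustrated with a short sketch. The code below is a hypothetical simplification, not the released implementation: it assumes each candidate architecture is encoded as a continuous vector, uses a toy surrogate `estimate_accuracy` in place of the paper's low-fidelity training, and moves the slow-learner toward the fast-learner with a simple random step as a stand-in for the actual update operator.

```python
import numpy as np


def estimate_accuracy(arch_vector):
    # Toy surrogate standing in for low-fidelity performance estimation
    # (e.g., briefly training the decoded network); hypothetical, for
    # illustration only.
    return -float(np.sum((arch_vector - 0.5) ** 2))


def pairwise_update(arch_a, arch_b, rng):
    # Distinguish the fast-learner (higher estimated accuracy) from the
    # slow-learner, then move the slow-learner toward the fast-learner.
    acc_a, acc_b = estimate_accuracy(arch_a), estimate_accuracy(arch_b)
    fast, slow = (arch_a, arch_b) if acc_a >= acc_b else (arch_b, arch_a)
    step = rng.uniform(0.0, 1.0, size=slow.shape)  # illustrative update rule
    return fast, slow + step * (fast - slow)


def search(population, generations, rng):
    # Each generation, pair up the candidates at random and apply the
    # pairwise update; fast-learners are kept unchanged.
    for _ in range(generations):
        order = rng.permutation(len(population))
        for i in range(0, len(order) - 1, 2):
            a, b = order[i], order[i + 1]
            population[a], population[b] = pairwise_update(
                population[a], population[b], rng)
    return population


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = [rng.uniform(0.0, 1.0, size=8) for _ in range(10)]
    population = search(population, generations=20, rng=rng)
    best = max(population, key=estimate_accuracy)
    print("best encoding:", np.round(best, 3))
```

In this toy setting the population gradually concentrates around the surrogate's optimum; the point the sketch is meant to convey is that the pairwise scheme only needs the relative ranking within each pair, which is why low-fidelity performance estimation suffices and full training of every candidate can be avoided.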