In this paper, we investigate a new variant of the neural architecture search (NAS) paradigm -- searching with random labels (RLNAS). The task sounds counter-intuitive for most existing NAS algorithms, since random labels provide little information about the performance of each candidate architecture. Instead, we propose a novel NAS framework based on the ease-of-convergence hypothesis, which requires only random labels during searching. The algorithm involves two steps: first, we train a SuperNet using random labels; second, from the SuperNet we extract the sub-network whose weights change most significantly during training. Extensive experiments are conducted on multiple datasets (e.g., NAS-Bench-201 and ImageNet) and multiple search spaces (e.g., DARTS-like and MobileNet-like). Very surprisingly, RLNAS achieves comparable or even better results than state-of-the-art NAS methods such as PC-DARTS and Single Path One-Shot, even though those counterparts utilize full ground-truth labels for searching. We hope our finding can inspire new understandings of the essence of NAS.
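To make the second step concrete, below is a minimal sketch of how candidate sub-networks could be ranked by how much their inherited weights change during random-label SuperNet training. It is an illustrative assumption, not the paper's exact procedure: the helper names are hypothetical, candidates are assumed to be given as lists of SuperNet parameter names, and the change is measured here as the angle between the flattened initial and trained weight vectors.

```python
import torch

def weight_change_angle(init_weights, trained_weights):
    """Angle between the flattened initial and trained weight vectors.

    A larger angle means a bigger change during random-label training,
    which the ease-of-convergence hypothesis treats as a sign of a
    promising sub-network. (Hypothetical helper; the paper's exact
    metric may differ.)
    """
    v0 = torch.cat([w.flatten() for w in init_weights])
    v1 = torch.cat([w.flatten() for w in trained_weights])
    cos = torch.dot(v0, v1) / (v0.norm() * v1.norm() + 1e-12)
    return torch.acos(cos.clamp(-1.0, 1.0)).item()

def rank_candidates(candidates, init_state, trained_state):
    """Score each candidate sub-network by the weight change of the
    SuperNet parameters it inherits; return candidates best-first."""
    scored = []
    for cand in candidates:
        # `cand` is assumed to be a list of parameter names that the
        # sub-network inherits from the SuperNet (hypothetical format).
        init_ws = [init_state[name] for name in cand]
        trained_ws = [trained_state[name] for name in cand]
        scored.append((weight_change_angle(init_ws, trained_ws), cand))
    return [cand for _, cand in sorted(scored, key=lambda x: x[0], reverse=True)]
```

In this sketch, `init_state` and `trained_state` would be the SuperNet's state dictionaries before and after random-label training; the top-ranked candidate is the sub-network selected by the search.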