One-shot neural architecture search (NAS) methods significantly reduce the search cost by treating the whole search space as a single network, which only needs to be trained once. However, current methods select each operation independently, without considering previous layers. Besides, the historical information obtained at huge computational cost is usually used only once and then discarded. In this paper, we introduce a sampling strategy based on Monte Carlo tree search (MCTS), with the search space modeled as a Monte Carlo tree (MCT), which captures the dependency among layers. Furthermore, intermediate results are stored in the MCT for future decisions and a better exploration-exploitation balance. Concretely, the MCT is updated using the training loss as a reward for the architecture performance; to accurately evaluate the numerous nodes, we propose node communication and hierarchical node selection methods for the training and search stages, respectively, which make better use of the operation rewards and hierarchical information. Moreover, for a fair comparison of different NAS methods, we construct an open-source NAS benchmark on a macro search space evaluated on CIFAR-10, namely NAS-Bench-Macro. Extensive experiments on NAS-Bench-Macro and ImageNet demonstrate that our method significantly improves search efficiency and performance. For example, by searching only $20$ architectures, our obtained architecture achieves $78.0\%$ top-1 accuracy with 442M FLOPs on ImageNet. The code (and benchmark) is available at: \url{https://github.com/xiusu/NAS-Bench-Macro}.
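As a minimal illustration (not the implementation released at the repository above), the sketch below shows how UCT-style selection over a layer-wise Monte Carlo tree could drive architecture sampling, with each tree depth corresponding to a layer and each child to a candidate operation. The names \texttt{MCTNode}, \texttt{sample\_architecture}, and \texttt{backup}, as well as the exploration constant, are hypothetical, and the proposed node communication and hierarchical node selection mechanisms are omitted.
\begin{verbatim}
import math

class MCTNode:
    """Node of the Monte Carlo tree: depth = layer index,
    children = candidate operations for the next layer."""
    def __init__(self, num_ops, parent=None):
        self.parent = parent
        self.children = {}   # op index -> MCTNode
        self.visits = 0
        self.value = 0.0     # running mean reward of this subtree
        self.num_ops = num_ops

    def uct_score(self, op, c=1.0):
        child = self.children.get(op)
        if child is None or child.visits == 0:
            return float("inf")            # try untried operations first
        explore = c * math.sqrt(math.log(self.visits + 1) / child.visits)
        return child.value + explore       # exploitation + exploration

def sample_architecture(root, num_layers, num_ops):
    """Sample one operation per layer by walking the tree with UCT."""
    node, path = root, []
    for _ in range(num_layers):
        op = max(range(num_ops), key=lambda o: node.uct_score(o))
        node = node.children.setdefault(op, MCTNode(num_ops, parent=node))
        path.append(op)
    return path, node    # the path defines the sub-network to train

def backup(leaf, reward):
    """Propagate a reward (e.g., derived from the training loss)
    from the sampled leaf back up to the root."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.value += (reward - node.value) / node.visits
        node = node.parent
\end{verbatim}
A typical loop under these assumptions would sample a path, train the corresponding sub-network of the one-shot supernet for a few steps, map the observed training loss to a reward (e.g., its negative or an exponential transform), and call \texttt{backup}, so that later samples favor operation sequences that have performed well.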