Neural Architecture Search (NAS) is widely used to automatically design the neural network with the best performance among a large number of candidate architectures. To reduce the search time, zero-shot NAS aims at designing training-free proxies that can predict the test performance of a given architecture. However, as shown recently, none of the zero-shot proxies proposed to date can actually work consistently better than a naive proxy, namely, the number of network parameters (#Params). To improve this state of affairs, as the main theoretical contribution, we first reveal how some specific gradient properties across different samples impact the convergence rate and generalization capacity of neural networks. Based on this theoretical analysis, we propose a new zero-shot proxy, ZiCo, the first proxy that works consistently better than #Params. We demonstrate that ZiCo works better than State-Of-The-Art (SOTA) proxies on several popular NAS-Benchmarks (NASBench101, NATSBench-SSS/TSS, TransNASBench-101) for multiple applications (e.g., image classification/reconstruction and pixel-level prediction). Finally, we demonstrate that the optimal architectures found via ZiCo are as competitive as the ones found by one-shot and multi-shot NAS methods, but with much less search time. For example, ZiCo-based NAS can find optimal architectures with 78.1%, 79.4%, and 80.4% test accuracy under inference budgets of 450M, 600M, and 1000M FLOPs on ImageNet within 0.4 GPU days.