Designing neural architectures requires immense manual effort, which has promoted the development of neural architecture search (NAS) to automate the design process. While previous NAS methods achieve promising results, they run slowly; zero-cost proxies, in contrast, run extremely fast but are less promising. There is therefore great potential to accelerate NAS with zero-cost proxies. Existing proxy-based methods have two limitations: unforeseeable reliability and one-shot usage. To address these limitations, we present ProxyBO, an efficient Bayesian optimization (BO) framework that utilizes zero-cost proxies to accelerate neural architecture search. We apply a generalization-ability measurement to estimate the fitness of the proxies on the target task during each iteration, and design a novel acquisition function that combines BO with zero-cost proxies based on their dynamic influence. Extensive empirical studies show that ProxyBO consistently outperforms competitive baselines on five tasks from three public benchmarks. Concretely, ProxyBO achieves up to 5.41x and 3.86x speedups over the state-of-the-art approaches REA and BRP-NAS, respectively.
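To make the core idea concrete, the following is a minimal, hypothetical sketch of how proxy fitness estimation and score combination could work: each zero-cost proxy's fitness is measured as the rank correlation between its scores and the performance of architectures evaluated so far, and a candidate's final score blends the BO acquisition value with proxy scores weighted by that fitness. All function names and the specific weighting scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: weight zero-cost proxy scores by their measured
# fitness on already-evaluated architectures, then blend them with a BO
# acquisition value. Names and the weighting rule are illustrative only.

def rank(values):
    """Return the 0-based rank of each value (ties broken by input order)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation, used here as a simple fitness measure."""
    n = len(xs)
    rx, ry = rank(xs), rank(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def combined_score(bo_acq, proxy_scores, fitnesses):
    """Blend the BO acquisition value with proxy scores weighted by fitness.

    Proxies with non-positive fitness (unreliable on this task) are
    ignored; the BO term gets a fixed unit weight in this toy version.
    """
    weights = [max(f, 0.0) for f in fitnesses]
    score = bo_acq + sum(w * s for w, s in zip(weights, proxy_scores))
    return score / (1.0 + sum(weights))

# Toy example: two proxies scored on four architectures already evaluated.
observed_perf = [0.70, 0.82, 0.75, 0.90]
proxy_a_hist = [1.0, 3.0, 2.0, 4.0]   # agrees with the observed ranking
proxy_b_hist = [4.0, 1.0, 3.0, 2.0]   # mostly disagrees

fit_a = spearman(proxy_a_hist, observed_perf)   # perfectly correlated
fit_b = spearman(proxy_b_hist, observed_perf)   # negatively correlated

# Score a new candidate: BO acquisition 0.5, proxy scores 0.8 and 0.2.
score = combined_score(bo_acq=0.5,
                       proxy_scores=[0.8, 0.2],
                       fitnesses=[fit_a, fit_b])
```

In this toy run the unreliable proxy receives zero weight, so the candidate's score is driven only by the BO acquisition and the reliable proxy, mirroring the abstract's point that proxy influence should be adjusted dynamically as evidence about their reliability accumulates.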