This paper proposes a novel differentiable architecture search method by formulating it as a distribution learning problem. We treat the continuously relaxed architecture mixing weights as random variables, modeled by a Dirichlet distribution. With recently developed pathwise derivatives, the Dirichlet parameters can be optimized easily with a gradient-based optimizer in an end-to-end manner. This formulation improves generalization ability and induces stochasticity that naturally encourages exploration of the search space. Furthermore, to alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme that enables searching directly on large-scale tasks, eliminating the gap between the search and evaluation phases. Extensive experiments demonstrate the effectiveness of our method. Specifically, we obtain a test error of 2.46% on CIFAR-10 and 23.7% on ImageNet under the mobile setting. On NAS-Bench-201, we also achieve state-of-the-art results on all three datasets and provide insights for the effective design of neural architecture search algorithms.
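As a rough illustration of this distribution-learning formulation (not the authors' implementation), the sketch below samples architecture mixing weights for one edge from a Dirichlet distribution using PyTorch's reparameterized `rsample`, which implements the pathwise derivative, and updates the concentration parameters by gradient descent. The loss is a stand-in placeholder, and names such as `supernet_loss` and `log_beta` are hypothetical.

```python
import torch
from torch.distributions import Dirichlet

# Learnable Dirichlet concentration parameters over num_ops candidate
# operations on a single edge (parameterized in log space for positivity).
num_ops = 8
log_beta = torch.zeros(num_ops, requires_grad=True)
optimizer = torch.optim.Adam([log_beta], lr=3e-4)

def supernet_loss(arch_weights):
    # Placeholder: in an actual search, arch_weights would mix the
    # candidate operations' outputs inside a weight-sharing supernet.
    target = torch.full((num_ops,), 1.0 / num_ops)
    return ((arch_weights - target) ** 2).sum()

for step in range(100):
    beta = log_beta.exp()                     # positive concentrations
    arch_weights = Dirichlet(beta).rsample()  # pathwise (reparameterized) sample
    loss = supernet_loss(arch_weights)
    optimizer.zero_grad()
    loss.backward()                           # gradients flow through rsample
    optimizer.step()
```

Because `Dirichlet.rsample` supports reparameterized gradients, the sampling step stays differentiable with respect to the concentration parameters, so the distribution itself can be trained end-to-end while the stochastic samples encourage exploration across candidate operations.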