Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications. These architectures consist of stages, which are sets of layers that operate on representations at the same resolution. It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network. However, the resulting architecture becomes computationally expensive in terms of floating-point operations, memory requirements, and inference time. Thus, significant human effort is necessary to evaluate different trade-offs between depth and performance. To handle this problem, recent works have proposed to automatically design high-performance architectures, mainly by means of neural architecture search (NAS). Current NAS strategies analyze a large set of candidate architectures and, hence, require vast computational resources and take many GPU days. Motivated by this, we propose a NAS approach to efficiently design accurate and low-cost convolutional architectures, and demonstrate that an efficient strategy for designing these architectures is to learn the depth stage-by-stage. For this purpose, our approach increases depth incrementally in each stage according to its importance, such that stages with low importance are kept shallow while stages with high importance become deeper. We conduct experiments on the CIFAR and different versions of ImageNet datasets, where we show that architectures discovered by our approach achieve better accuracy and efficiency than human-designed architectures. Additionally, we show that architectures discovered on CIFAR-10 can be successfully transferred to large datasets. Compared to previous NAS approaches, our method is substantially more efficient, as it evaluates one order of magnitude fewer models and yields architectures on par with the state-of-the-art.
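The stage-by-stage depth allocation described above can be sketched as a simple greedy loop. This is a minimal illustrative sketch, not the paper's exact procedure: the importance scores, the depth-discounted scoring rule, and the function names are assumptions introduced here for clarity.

```python
# Hypothetical sketch of importance-driven, stage-by-stage depth growth.
# The scoring rule (importance divided by current depth) is an assumption,
# chosen only to show how important stages end up deeper than others.

def grow_depths(importance, extra_layers, min_depth=1):
    """Distribute `extra_layers` layers across stages, one at a time,
    always deepening the stage with the best importance-per-layer score."""
    depths = [min_depth] * len(importance)
    for _ in range(extra_layers):
        # Discount each stage's importance by its current depth, so stages
        # with low importance stay shallow while important ones get deeper.
        scores = [imp / d for imp, d in zip(importance, depths)]
        best = max(range(len(scores)), key=scores.__getitem__)
        depths[best] += 1
    return depths

# Example: three stages with decreasing importance.
print(grow_depths([0.6, 0.3, 0.1], extra_layers=6))  # → [5, 3, 1]
```

In this toy run, the most important stage receives most of the added layers, matching the abstract's intuition that depth should concentrate where it matters.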