Neural architecture search (NAS) and network pruning are widely studied techniques for efficient AI, but neither is yet fully satisfactory. NAS performs an exhaustive search over candidate architectures, incurring tremendous search cost. Although (structured) pruning can simply shrink model dimensions, it remains unclear how to decide the per-layer sparsity automatically and optimally. In this work, we revisit the problem of layer-width optimization and propose Pruning-as-Search (PaS), an end-to-end channel pruning method that searches for the desired sub-network automatically and efficiently. Specifically, we add a depth-wise binary convolution to learn pruning policies directly through gradient descent. By combining structural reparameterization with PaS, we successfully search out a new family of VGG-like, lightweight networks whose width can be set freely for each layer rather than each stage. Experimental results show that our proposed architecture outperforms prior art by around $1.0\%$ top-1 accuracy at similar inference speed on the ImageNet-1000 classification task. Furthermore, we demonstrate the effectiveness of our width search on complex tasks including instance segmentation and image translation. Code and models are released.
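To make the pruning-policy mechanism concrete, below is a minimal PyTorch sketch of a depth-wise binary channel gate trained by gradient descent via a straight-through estimator. The class name `BinaryChannelGate` and all implementation details are illustrative assumptions for exposition, not the paper's released code.

```python
import torch
import torch.nn as nn

class BinaryChannelGate(nn.Module):
    """Sketch of a depth-wise binary 1x1 convolution acting as a learnable
    channel gate (hypothetical names; not the authors' exact implementation).
    Each channel has one real-valued weight that is binarized in the forward
    pass; a straight-through estimator lets gradients reach the weights, so
    the number of surviving channels per layer is learned by SGD."""

    def __init__(self, num_channels: int):
        super().__init__()
        # One scalar weight per channel, equivalent to a depth-wise 1x1 conv.
        self.weight = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Binarize: a channel is kept if its weight is positive, pruned otherwise.
        hard = (self.weight > 0).float()
        # Straight-through estimator: forward uses the hard 0/1 mask,
        # backward passes the gradient through to the soft weight.
        mask = hard + self.weight - self.weight.detach()
        return x * mask.view(1, -1, 1, 1)

# Usage sketch: attach the gate after a convolution; after training, channels
# whose gate equals 0 are removed to obtain the searched sub-network width.
layer = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), BinaryChannelGate(64))
out = layer(torch.randn(2, 3, 32, 32))
```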