Neural architecture search (NAS) has proven to be among the most effective approaches for many tasks, generating application-adaptive neural architectures, but it is still challenged by high computational cost and memory consumption. At the same time, 1-bit convolutional neural networks (CNNs) with binarized weights and activations show promise for resource-limited embedded devices. A natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS, exploiting the strengths of both in a unified framework. To this end, a Child-Parent (CP) model is introduced into differentiable NAS to search for a binarized architecture (Child) under the supervision of a full-precision model (Parent). In the search stage, the Child-Parent model uses an indicator derived from the accuracies of the child and parent models to evaluate performance and abandon operations with less potential. In the training stage, a kernel-level CP loss is introduced to optimize the binarized network. Extensive experiments demonstrate that the proposed CP-NAS achieves accuracy comparable to traditional NAS on both the CIFAR and ImageNet databases. It achieves an accuracy of $95.27\%$ on CIFAR-10 and $64.3\%$ on ImageNet with binarized weights and activations, and a $30\%$ faster search than prior arts.
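The operation-pruning idea above might be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the exact way child and parent accuracies are combined into an indicator, and the keep ratio are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a Child-Parent pruning indicator (not the CP-NAS code).
# Assumption: each candidate operation records the validation accuracy of the
# 1-bit child model and the full-precision parent model during the search stage.

def cp_indicator(child_acc, parent_acc):
    """Combine child accuracy with the child-parent accuracy gap.

    A high child accuracy and a small gap to the parent both suggest the
    operation has potential; this particular combination is an assumption.
    """
    return child_acc - (parent_acc - child_acc)

def prune_operations(ops, keep_ratio=0.5):
    """Keep only the operations with the highest indicator values."""
    ranked = sorted(
        ops,
        key=lambda op: cp_indicator(op["child_acc"], op["parent_acc"]),
        reverse=True,
    )
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

# Illustrative candidate operations with made-up accuracies.
candidates = [
    {"name": "conv3x3", "child_acc": 0.92, "parent_acc": 0.95},
    {"name": "conv5x5", "child_acc": 0.88, "parent_acc": 0.94},
    {"name": "skip",    "child_acc": 0.90, "parent_acc": 0.91},
    {"name": "maxpool", "child_acc": 0.85, "parent_acc": 0.93},
]
kept = prune_operations(candidates, keep_ratio=0.5)
print([op["name"] for op in kept])  # → ['conv3x3', 'skip']
```

Operations whose binarized child lags far behind its full-precision parent score low and are abandoned, which is how the search narrows the space before the kernel-level CP loss refines the final binarized network.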