Traditional neural architecture search (NAS) has had a significant impact on computer vision by automatically designing network architectures for various tasks. In this paper, binarized neural architecture search (BNAS), with a search space of binarized convolutions, is introduced to produce extremely compressed models that reduce the huge computational cost on embedded devices for edge computing. BNAS is more challenging than NAS due to the learning inefficiency caused by the binarization optimization requirements and the huge architecture space, as well as the performance loss when handling wild data in various computing applications. To address these issues, we introduce operation space reduction and channel sampling into BNAS to significantly reduce the cost of searching. This is accomplished through a performance-based strategy that is robust to wild data and is further used to abandon operations with less potential. Furthermore, we introduce the Upper Confidence Bound (UCB) to solve 1-bit BNAS. Two optimization methods for binarized neural networks are used to validate the effectiveness of our BNAS. Extensive experiments demonstrate that the proposed BNAS achieves performance comparable to NAS on both the CIFAR and ImageNet databases. An accuracy of $96.53\%$ vs. $97.22\%$ is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a $40\%$ faster search than the state-of-the-art PC-DARTS. On the wild face recognition task, our binarized models achieve performance similar to their corresponding full-precision models.
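The UCB-based operation selection mentioned in the abstract can be illustrated with a minimal bandit-style sketch. This is not the paper's implementation: the reward model (noisy validation accuracy per operation), the exploration constant `c`, and all numbers below are illustrative assumptions. Each candidate operation is treated as an arm; the one with the highest UCB score (mean reward plus an exploration bonus) is sampled, and rarely selected, low-reward operations become candidates for abandonment.

```python
import math
import random

def select_op(counts, values, t, c=0.5):
    """Pick the operation with the highest UCB score.

    counts[i]: times operation i was sampled so far.
    values[i]: running mean reward of operation i.
    The bonus c*sqrt(ln t / n_i) favors rarely tried operations.
    """
    scores = [values[i] + c * math.sqrt(math.log(t) / counts[i])
              for i in range(len(counts))]
    return max(range(len(scores)), key=scores.__getitem__)

def ucb_search(true_acc, rounds=3000, seed=0):
    """Simulate UCB over candidate operations.

    true_acc: hidden 'true' validation accuracy of each operation
    (illustrative stand-in for evaluating a sampled architecture).
    Returns per-operation sample counts and estimated mean rewards.
    """
    rng = random.Random(seed)
    k = len(true_acc)
    counts, values = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        if t <= k:
            i = t - 1  # warm-up: try each operation once
        else:
            i = select_op(counts, values, t)
        reward = true_acc[i] + rng.gauss(0.0, 0.05)  # noisy accuracy
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # incremental mean
    return counts, values

counts, values = ucb_search([0.9, 0.6, 0.5, 0.4])
# The strongest operation attracts most of the sampling budget;
# the least-sampled operations are the ones a performance-based
# strategy would abandon to shrink the search space.
```

In this sketch the least-sampled operations map onto the abstract's "abandon operations with less potential": after enough rounds, pruning the arms with the lowest sample counts (or lowest mean rewards) reduces the operation space at little risk to the final architecture.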