In this paper, we propose Broad Neural Architecture Search (BNAS), in which we elaborately design a broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN) to address the above issue. On the one hand, the proposed broad scalable architecture trains quickly owing to its shallow topology. Moreover, we adopt the reinforcement learning and parameter sharing used in ENAS as the optimization strategy of BNAS, so the proposed approach achieves higher search efficiency. On the other hand, the broad scalable architecture extracts multi-scale features and enhancement representations and feeds them into a global average pooling layer, yielding more reasonable and comprehensive representations, so its performance can be guaranteed. In particular, we also develop two variants of BNAS that modify the topology of BCNN. To verify the effectiveness of BNAS, we perform several experiments whose results show that 1) BNAS completes the search in 0.19 days, 2.37x less expensive than ENAS, the previously most efficient reinforcement learning-based NAS approach; 2) among small (0.5 million parameters) and medium (1.1 million parameters) models, the architectures learned by BNAS obtain state-of-the-art performance (3.58% and 3.24% test error, respectively) on CIFAR-10; and 3) the learned architecture achieves 25.3% top-1 error on ImageNet using only 3.9 million parameters.
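To make the broad topology concrete, the following is a minimal sketch of the data flow described above: a shallow stack of convolution blocks produces multi-scale features, enhancement blocks re-process the deepest features, and every intermediate output is globally average-pooled and concatenated before the classifier. All block internals, names, and widths here (e.g. `BroadCNNSketch`, `width=64`) are illustrative assumptions, not the searched cells of the actual BCNN.

```python
import torch
import torch.nn as nn

class BroadCNNSketch(nn.Module):
    """Hypothetical sketch of a broad scalable architecture: shallow
    convolution blocks (multi-scale features) plus enhancement blocks,
    with all outputs pooled and concatenated for classification."""

    def __init__(self, in_ch=3, width=64, num_classes=10):
        super().__init__()
        # Shallow stack of convolution blocks (multi-scale feature extraction).
        self.conv_blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch if i == 0 else width, width, 3,
                          stride=2, padding=1),
                nn.BatchNorm2d(width), nn.ReLU(inplace=True))
            for i in range(3)
        ])
        # Enhancement blocks that further transform the deepest features.
        self.enh_blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(width, width, 1),
                          nn.BatchNorm2d(width), nn.ReLU(inplace=True))
            for _ in range(2)
        ])
        self.gap = nn.AdaptiveAvgPool2d(1)
        # 3 conv outputs + 2 enhancement outputs feed the classifier.
        self.fc = nn.Linear(width * 5, num_classes)

    def forward(self, x):
        feats = []
        for block in self.conv_blocks:
            x = block(x)
            feats.append(x)   # keep each scale's feature map
        e = x
        for block in self.enh_blocks:
            e = block(e)
            feats.append(e)   # enhancement representations
        # Global average pooling over every collected representation,
        # concatenated into one comprehensive feature vector.
        pooled = [self.gap(f).flatten(1) for f in feats]
        return self.fc(torch.cat(pooled, dim=1))

# Usage: a CIFAR-10-sized input yields a [1, 10] logit tensor.
logits = BroadCNNSketch()(torch.randn(1, 3, 32, 32))
```

Because every block's output reaches the classifier directly through pooling, the network stays shallow; this is the property the abstract credits for the fast training speed.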