In this paper, we propose Broad Neural Architecture Search (BNAS), in which we elaborately design a broad scalable architecture, dubbed Broad Convolutional Neural Network (BCNN), to solve the above issue. On one hand, the proposed broad scalable architecture trains quickly owing to its shallow topology. Moreover, we adopt reinforcement learning and parameter sharing, as used in ENAS, as the optimization strategy of BNAS, so the proposed approach achieves higher search efficiency. On the other hand, the broad scalable architecture extracts multi-scale features and enhancement representations and feeds them into a global average pooling layer to yield more reasonable and comprehensive representations, so the performance of the broad scalable architecture can be guaranteed. In particular, we also develop two variants of BNAS that modify the topology of BCNN. To verify the effectiveness of BNAS, several experiments are performed, and the results show that 1) BNAS delivers a search cost of only 0.19 days, which is 2.37x less expensive than ENAS, the most efficient reinforcement-learning-based NAS approach; 2) compared with several small-size (about 0.5 and 1.1 million parameters) models, the architectures learned by BNAS achieve state-of-the-art performance (3.58% and 3.24% test error) on CIFAR-10; and 3) the learned architecture attains 25.3% top-1 error on ImageNet using just 3.9 million parameters.
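To make the broad topology concrete, below is a minimal PyTorch sketch of the idea described above: a few stacked convolution blocks produce multi-scale feature maps, each map is refined by an "enhancement" block, and all outputs are global-average-pooled and concatenated before the classifier. The block internals, channel widths, and names (`ConvBlock`, `BroadCNN`) are illustrative assumptions for exposition, not the cells actually discovered by BNAS.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """A plain conv-BN-ReLU block standing in for a searched convolution cell."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class BroadCNN(nn.Module):
    """Shallow, broad topology: every scale contributes to the final representation."""
    def __init__(self, num_classes=10, widths=(64, 128, 256)):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = 3
        for w in widths:
            self.convs.append(ConvBlock(in_ch, w, stride=2))
            in_ch = w
        # one enhancement block per scale (here: a 1x1 conv refinement, assumed)
        self.enhance = nn.ModuleList(
            nn.Sequential(nn.Conv2d(w, w, 1, bias=False),
                          nn.BatchNorm2d(w), nn.ReLU(inplace=True))
            for w in widths
        )
        self.pool = nn.AdaptiveAvgPool2d(1)       # global average pooling
        self.fc = nn.Linear(sum(widths), num_classes)

    def forward(self, x):
        pooled = []
        for conv, enh in zip(self.convs, self.enhance):
            x = conv(x)                            # multi-scale feature
            pooled.append(self.pool(enh(x)).flatten(1))  # enhanced, then pooled
        # fuse all scales into one comprehensive representation
        return self.fc(torch.cat(pooled, dim=1))

# quick shape check on a CIFAR-10-sized input
logits = BroadCNN()(torch.randn(2, 3, 32, 32))
assert logits.shape == (2, 10)
```

Because the network stays shallow and all scales feed the classifier directly, gradients reach every block through a short path, which is one plausible reading of why such a topology trains fast.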