Differentiable neural architecture search (DARTS), as a gradient-guided search method, greatly reduces the computational cost and speeds up the search. In DARTS, architecture parameters are introduced for the candidate operations, but the parameters of some weight-equipped operations may not be well trained in the initial stage, which causes unfair competition among the candidate operations. Weight-free operations then appear in large numbers, resulting in the performance-collapse phenomenon. In addition, training the supernet occupies a large amount of memory, which leads to low memory utilization. In this paper, a partial channel connection based on channel attention for differentiable neural architecture search (ADARTS) is proposed. The channels with higher weights are selected through the attention mechanism and sent into the operation space, while the remaining channels are directly concatenated with the processed channels. Selecting a few channels with higher attention weights transmits the more important feature information into the search space and greatly improves search efficiency and memory utilization; it also avoids the instability of the network structure caused by random channel selection. Experimental results show that ADARTS achieves classification error rates of 2.46% and 17.06% on CIFAR-10 and CIFAR-100, respectively. ADARTS can effectively solve the problem of too many skip connections appearing in the search process and obtains network structures with better performance.
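The attention-guided partial channel connection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a squeeze-and-excitation style attention (global average pooling, two fully connected layers, sigmoid) that scores each channel, after which the top-k channels are routed to the candidate operations and the rest bypass the operation space. The helper `attention_channel_split` and its weight arguments are hypothetical names introduced here.

```python
import numpy as np

def attention_channel_split(x, w_fc1, w_fc2, k):
    """Hypothetical sketch of an attention-based partial channel split.

    x: feature map of shape (C, H, W)
    w_fc1, w_fc2: weights of the two attention fully connected layers
    k: number of channels sent into the operation space
    Returns (selected, bypassed, scores).
    """
    c = x.shape[0]
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    squeezed = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights
    hidden = np.maximum(0.0, w_fc1 @ squeezed)
    scores = 1.0 / (1.0 + np.exp(-(w_fc2 @ hidden)))
    # Keep the k channels with the highest attention weights; the rest
    # skip the candidate operations and are concatenated back later
    selected = np.argsort(scores)[::-1][:k]
    bypassed = np.setdiff1d(np.arange(c), selected)
    return selected, bypassed, scores

# Toy usage: 8 channels, send the top 2 into the operation space
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8))  # reduction ratio 2 -> hidden size 4
w2 = rng.standard_normal((8, 4))
sel, rest, scores = attention_channel_split(x, w1, w2, k=2)
print(sel, rest)
```

In the full method, the selected channels pass through the weighted mixture of candidate operations while the bypassed channels are concatenated with the processed output, so only a fraction of the feature map incurs the memory cost of the supernet.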