Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based neural architecture search method. Since its introduction, little work has been done on adapting the search space to state-of-the-art architecture design principles for CNNs. In this work, we address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. We introduce the Pseudo-Inverted Bottleneck Conv (PIBConv) block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a DARTS network of comparable size at layer counts as small as 2. Furthermore, with fewer layers it not only achieves higher accuracy with a lower computational footprint (measured in GMACs) and parameter count, but GradCAM comparisons also show that our network detects distinctive features of target objects better than DARTS. Code is available at https://github.com/mahdihosseini/PIBConv.