Traditional self-attention mechanisms in convolutional networks, such as SENet and CBAM, typically use only the output of the previous layer as the input to the attention module. In this paper, we propose a new attention modification that obtains the output of the classification network in advance and uses it as part of the input to the attention module. We use the auxiliary classifier proposed in GoogLeNet to obtain these early predictions and pass them into the attention networks. We add this mechanism to SE-ResNet for our experiments and achieve a classification accuracy improvement of up to 1.94% on CIFAR-100.
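The sketch below illustrates one plausible reading of this idea in PyTorch: an SE-style channel-attention block whose excitation MLP consumes the globally pooled features concatenated with logits from an auxiliary classifier. The class name `AuxGuidedSEBlock` and all dimensions are hypothetical, not the paper's actual code.

```python
import torch
import torch.nn as nn

class AuxGuidedSEBlock(nn.Module):
    """SE-style channel attention that also consumes auxiliary-classifier
    logits (early class predictions) as part of its input.
    A minimal sketch under assumed names/shapes, not the paper's code."""

    def __init__(self, channels: int, num_classes: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        # Excitation MLP sees pooled features concatenated with aux logits.
        self.fc = nn.Sequential(
            nn.Linear(channels + num_classes, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, aux_logits: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)                 # (B, C)
        joint = torch.cat([squeezed, aux_logits], dim=1)   # (B, C + num_classes)
        scale = self.fc(joint).view(b, c, 1, 1)            # channel weights in (0, 1)
        return x * scale                                   # reweight feature maps


# Usage: an auxiliary classifier (as in GoogLeNet) produces early logits
# from an intermediate feature map; those logits then guide the attention.
if __name__ == "__main__":
    x = torch.randn(8, 256, 16, 16)       # intermediate feature map
    aux_logits = torch.randn(8, 100)      # early predictions, e.g. CIFAR-100
    block = AuxGuidedSEBlock(channels=256, num_classes=100)
    out = block(x, aux_logits)
    print(out.shape)  # torch.Size([8, 256, 16, 16])
```

Compared with a standard SE block, the only change is the concatenation of the auxiliary logits before the excitation MLP, so the attention weights can condition on an early estimate of the class.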