Recently, many effective self-attention modules have been proposed to boost model performance by exploiting the internal information of convolutional neural networks in computer vision. However, most previous works overlook the design of the pooling strategy in the self-attention mechanism, taking global average pooling for granted, which hinders further improvement of the self-attention mechanism's performance. We empirically find and verify that a simple linear combination of global max-pooling and global min-pooling can produce pooling strategies that match or exceed the performance of global average pooling. Based on this empirical observation, we propose a simple yet effective self-attention module, SPENet, which adopts a self-adaptive pooling strategy built on global max-pooling and global min-pooling, together with a lightweight module for producing the attention map. Extensive experiments on widely used benchmark datasets and popular self-attention networks demonstrate the effectiveness of SPENet.
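The core idea above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's actual implementation: the function name `spe_channel_attention`, the fixed mixing weight `alpha` (learnable in a real module), and the sigmoid gate standing in for SPENet's lightweight attention module are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spe_channel_attention(x, alpha=0.5):
    """Illustrative channel attention from a linear combination of
    global max-pooling and global min-pooling.

    x     : (N, C, H, W) feature map
    alpha : mixing weight; fixed here, but would be learnable
            (self-adaptive) in a real module -- an assumption.
    """
    gmax = x.max(axis=(2, 3))                     # global max-pooling -> (N, C)
    gmin = x.min(axis=(2, 3))                     # global min-pooling -> (N, C)
    pooled = alpha * gmax + (1.0 - alpha) * gmin  # linear combination of the two
    attn = sigmoid(pooled)                        # placeholder lightweight gate
    return x * attn[:, :, None, None]             # rescale the feature map

x = np.random.randn(2, 8, 4, 4).astype(np.float32)
y = spe_channel_attention(x)
```

With `alpha = 1` this degenerates to pure max-pooling and with `alpha = 0` to pure min-pooling; intermediate values give the linear combinations the abstract refers to.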