Structural re-parameterization (SRP) is a deep learning technique that converts one network architecture into another through equivalent parameter transformations. These transformations allow the extra costs incurred during training for performance improvement, such as added parameters and inference time, to be removed at inference, and SRP therefore has great potential for industrial and practical applications. Existing SRP methods successfully cover many commonly used structures, such as normalization layers, pooling, and multi-branch convolution. However, the widely used self-attention modules cannot be handled directly by SRP, because these modules usually act on the backbone network in a multiplicative manner and their outputs are input-dependent at inference time, which limits the application scenarios of SRP. In this paper, we conduct extensive experiments from a statistical perspective and discover an interesting phenomenon, the Stripe Observation, which reveals that channel attention values quickly approach constant vectors during training. This observation inspires us to propose a simple-yet-effective attention-alike structural re-parameterization (ASR) that achieves SRP for a given network while enjoying the effectiveness of the self-attention mechanism. Extensive experiments on several standard benchmarks demonstrate the effectiveness of ASR in generally improving the performance of existing backbone networks, self-attention modules, and SRP methods without any elaborate model crafting. We also analyze the limitations of ASR and provide experimental or theoretical evidence for its strong robustness.
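To illustrate why the Stripe Observation makes re-parameterization possible, the following is a minimal PyTorch sketch (our own illustration, not the authors' code): assuming a channel attention module has converged to a constant per-channel vector a, the multiplicative attention a * conv(x) can be folded into the convolution's weights and bias at inference, since scaling each output channel commutes with the convolution.

```python
import torch
import torch.nn as nn

def fold_constant_attention(conv: nn.Conv2d, a: torch.Tensor) -> nn.Conv2d:
    """Absorb a constant per-channel attention vector `a` (shape [C_out])
    into `conv`, so that fused(x) == a.view(1, -1, 1, 1) * conv(x)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups,
                      bias=conv.bias is not None)
    with torch.no_grad():
        # Scaling output channel c of the result equals scaling
        # the c-th filter (and bias term) by a[c].
        fused.weight.copy_(conv.weight * a.view(-1, 1, 1, 1))
        if conv.bias is not None:
            fused.bias.copy_(conv.bias * a)
    return fused

# Quick equivalence check (a is a stand-in for the converged constants).
conv = nn.Conv2d(8, 16, 3, padding=1)
a = torch.rand(16)
x = torch.randn(2, 8, 32, 32)
fused = fold_constant_attention(conv, a)
assert torch.allclose(fused(x), a.view(1, -1, 1, 1) * conv(x), atol=1e-5)
```

This fusion is only valid once the attention output is (approximately) input-independent, which is precisely what the Stripe Observation suggests happens during training.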