Transformer networks have achieved great progress in computer vision tasks. The Transformer-in-Transformer (TNT) architecture utilizes an inner transformer and an outer transformer to extract both local and global representations. In this work, we present new TNT baselines by introducing two advanced designs: 1) a pyramid architecture, and 2) a convolutional stem. The new "PyramidTNT" significantly improves the original TNT by establishing hierarchical representations. PyramidTNT achieves better performance than previous state-of-the-art vision transformers such as Swin Transformer. We hope this new baseline will be helpful for further research and application of vision transformers. Code will be available at https://github.com/huawei-noah/CV-Backbones/tree/master/tnt_pytorch.
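To make the inner/outer structure concrete, below is a minimal PyTorch sketch of a TNT-style block: an inner transformer mixes pixel-level tokens within each patch (local representation), and an outer transformer mixes patch-level tokens across the image (global representation). This is an illustrative simplification, not the official implementation in the repository above; the class name `TNTBlock`, the dimensions, and the use of `nn.TransformerEncoderLayer` are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    """Illustrative Transformer-in-Transformer block (not the official code).

    Inner transformer: attends over pixel tokens inside each patch (local).
    Outer transformer: attends over patch tokens across the image (global).
    """
    def __init__(self, inner_dim=24, outer_dim=384, num_pixels=16, num_heads=6):
        super().__init__()
        # Local mixing among pixel tokens within a patch.
        self.inner = nn.TransformerEncoderLayer(
            d_model=inner_dim, nhead=4, dim_feedforward=4 * inner_dim,
            batch_first=True, norm_first=True)
        # Fold the patch's pixel tokens into the patch embedding space.
        self.proj = nn.Linear(num_pixels * inner_dim, outer_dim)
        # Global mixing among patch tokens.
        self.outer = nn.TransformerEncoderLayer(
            d_model=outer_dim, nhead=num_heads, dim_feedforward=4 * outer_dim,
            batch_first=True, norm_first=True)

    def forward(self, pixel_tokens, patch_tokens):
        # pixel_tokens: (B * num_patches, num_pixels, inner_dim)
        # patch_tokens: (B, num_patches, outer_dim)
        B, N, _ = patch_tokens.shape
        pixel_tokens = self.inner(pixel_tokens)                    # local
        local = self.proj(pixel_tokens.flatten(1)).view(B, N, -1)  # to patch space
        patch_tokens = self.outer(patch_tokens + local)            # global
        return pixel_tokens, patch_tokens

# Toy usage: batch of 2 images, 196 patches, 16 pixel tokens per patch.
pix = torch.randn(2 * 196, 16, 24)
pat = torch.randn(2, 196, 384)
pix, pat = TNTBlock()(pix, pat)
```

In the PyramidTNT variant described above, such blocks would additionally be arranged in stages with decreasing token resolution (the pyramid) and preceded by a convolutional stem; those details are omitted here for brevity.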