Transformer is a new kind of neural architecture that encodes the input data into powerful features via the attention mechanism. Basically, visual transformers first divide the input images into several local patches and then calculate both their representations and their relationships. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch division is not fine enough for excavating features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16$\times$16) as "visual sentences" and propose to further divide them into smaller patches (e.g., 4$\times$4) as "visual words". The attention of each word will be calculated with the other words in the given visual sentence at negligible computational cost. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve an $81.5\%$ top-1 accuracy on ImageNet, which is about $1.7\%$ higher than that of the state-of-the-art visual transformer with similar computational cost. The PyTorch code is available at https://github.com/huawei-noah/CV-Backbones/tree/master/tnt_pytorch, and the MindSpore code is at https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT.
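The word/sentence scheme above can be sketched minimally as follows. This is a NumPy-only illustration, not the paper's implementation: it uses single-head attention with identity query/key/value projections, and a random matrix `W` stands in for the learned linear map that folds each sentence's word features back into its sentence embedding.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention over the last two axes.
    # Identity Q/K/V projections: illustrative only, learned in a real model.
    d = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ x

rng = np.random.default_rng(0)
n_sent, n_word, d_word, d_sent = 4, 16, 24, 96  # e.g., 4x4 words per 16x16 sentence
words = rng.standard_normal((n_sent, n_word, d_word))      # "visual words"
sentences = rng.standard_normal((n_sent, d_sent))          # "visual sentences"
W = rng.standard_normal((n_word * d_word, d_sent)) * 0.01  # word->sentence map (learned in practice)

# Inner transformer: attention among the words of each sentence.
words = words + self_attention(words)
# Fold word features back into the corresponding sentence embedding.
sentences = sentences + words.reshape(n_sent, -1) @ W
# Outer transformer: attention among the sentences.
sentences = sentences + self_attention(sentences)
```

The inner attention operates on short length-16 sequences in a small dimension, which is why its cost is negligible next to the outer attention over the full set of patches.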