We propose the Global Context Vision Transformer (GC ViT), a novel architecture that enhances parameter and compute utilization for computer vision tasks. At the core of the model are global context self-attention modules, used jointly with standard local self-attention, to effectively and efficiently model both long- and short-range spatial interactions, without resorting to complex operations such as attention masks or local window shifting. While local self-attention modules are responsible for modeling short-range information, global query tokens are shared across all global self-attention modules to interact with local key and value representations. In addition, we address the lack of inductive bias in ViTs and improve the modeling of inter-channel dependencies by proposing a novel downsampler that leverages a parameter-efficient fused inverted residual block. The proposed GC ViT achieves new state-of-the-art performance across image classification, object detection, and semantic segmentation tasks. On the ImageNet-1K classification dataset, GC ViT models with 51M, 90M, and 201M parameters achieve 84.3%, 84.9%, and 85.6% Top-1 accuracy, respectively, surpassing comparably-sized prior art such as the CNN-based ConvNeXt and the ViT-based Swin Transformer. Pre-trained GC ViT backbones consistently outperform prior work, sometimes by large margins, on the downstream tasks of object detection, instance segmentation, and semantic segmentation on the MS COCO and ADE20K datasets.
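The shared global query mechanism described above can be illustrated with a minimal PyTorch sketch, shown below. This is not the authors' official implementation: the module name, tensor layouts, and the way global queries are broadcast to windows are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the official GC ViT code) of global context
# self-attention: a single set of global query tokens is shared across all
# local windows, while keys and values are computed per window.
import torch
import torch.nn as nn


class GlobalQueryAttention(nn.Module):
    def __init__(self, dim, num_heads, window_size):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.window_size = window_size
        # Keys and values are produced locally, one set per window.
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, q_global):
        # x:        (B * num_windows, N, dim) local window tokens, N = window_size**2
        # q_global: (B, num_heads, N, head_dim) global queries shared by all windows
        B_, N, C = x.shape
        kv = self.kv(x).reshape(B_, N, 2, self.num_heads, self.head_dim)
        kv = kv.permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]  # each: (B_, num_heads, N, head_dim)

        # Broadcast the shared global queries to every window of the same image,
        # assuming windows of one image are laid out consecutively along dim 0.
        num_windows = B_ // q_global.shape[0]
        q = q_global.repeat_interleave(num_windows, dim=0)

        attn = (q * self.scale) @ k.transpose(-2, -1)  # (B_, num_heads, N, N)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        return self.proj(out)
```

In this sketch the global queries replace the per-window query projection used by standard local window attention, which is what lets every window attend with image-level context at no extra attention cost; the exact query-generation and broadcasting details in the released GC ViT code may differ.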