Modern high-scoring models of vision in the Brain-Score competition do not stem from Vision Transformers. However, in this paper, we provide evidence against the unexpected trend of Vision Transformers (ViT) not being perceptually aligned with human visual representations by showing how a dual-stream Transformer, a CrossViT \textit{à la} Chen et al. (2021), under a joint rotationally-invariant and adversarial optimization procedure yields 2nd place in the aggregate Brain-Score 2022 competition (Schrimpf et al., 2020b) averaged across all visual categories, and at the time of the competition held 1st place for the highest explainable variance of area V4. In addition, our current Transformer-based model also achieves greater explainable variance for areas V4, IT and Behaviour than a biologically-inspired CNN (ResNet50) that integrates a frontal V1-like computation module (Dapello et al., 2020). To assess the contribution of the optimization scheme with respect to the CrossViT architecture, we perform several additional experiments on differently optimized CrossViTs regarding adversarial robustness, common corruption benchmarks, mid-ventral stimuli interpretation and feature inversion. Against our initial expectations, our family of results provides tentative support for an \textit{``All roads lead to Rome''} argument enforced via a joint optimization rule even for non-biologically-motivated models of vision such as Vision Transformers. Code is available at https://github.com/williamberrios/BrainScore-Transformers
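The abstract's central ingredient is a joint rotationally-invariant and adversarial optimization procedure. Below is a minimal, hypothetical PyTorch sketch of what one such training step could look like: random rotations of each input (encouraging rotational invariance) combined with an inner $\ell_\infty$ PGD maximization. The function names, hyperparameters (\texttt{eps}, \texttt{alpha}, \texttt{steps}, \texttt{max\_deg}), and overall structure are illustrative assumptions, not the authors' exact implementation; consult the linked repository for the real recipe.

```python
# Hypothetical sketch: joint rotation augmentation + adversarial (PGD) training.
# All names and hyperparameter values here are assumptions for illustration,
# not the implementation from the BrainScore-Transformers repository.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, steps=5):
    """Standard L-inf PGD: find a perturbation within an eps-ball that
    maximizes the classification loss (the inner maximization)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def joint_training_step(model, optimizer, x, y, max_deg=180.0):
    """One joint step: rotate each image by a random angle (pushing toward
    rotational invariance), then minimize the loss on PGD adversarial
    examples of the rotated batch (the outer minimization)."""
    angles = torch.empty(x.size(0)).uniform_(-max_deg, max_deg)
    x_rot = torch.stack(
        [TF.rotate(img, float(a)) for img, a in zip(x, angles)]
    )
    x_adv = pgd_attack(model, x_rot, y)       # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```

Equivalently, under these assumptions the objective can be written as $\min_\theta \, \mathbb{E}_{(x,y)} \, \mathbb{E}_{\phi \sim U(-\phi_{\max}, \phi_{\max})} \max_{\|\delta\|_\infty \le \epsilon} \mathcal{L}\big(f_\theta(R_\phi(x) + \delta), y\big)$, where $R_\phi$ rotates the input by angle $\phi$; the model $f_\theta$ would here be a dual-stream CrossViT rather than a single-stream ViT.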