Humans are remarkably flexible in understanding viewpoint changes, thanks to a visual cortex that supports the perception of 3D structure. In contrast, most computer vision models that learn visual representations from a pool of 2D images often fail to generalize to novel camera viewpoints. Recently, vision architectures have shifted toward convolution-free architectures, visual Transformers, which operate on tokens derived from image patches. However, these Transformers perform no explicit operations to learn viewpoint-agnostic representations for visual understanding. To this end, we propose a 3D Token Representation Layer (3DTRL) that estimates the 3D positional information of the visual tokens and leverages it to learn viewpoint-agnostic representations. The key elements of 3DTRL are a pseudo-depth estimator and a learned camera matrix that impose geometric transformations on the tokens, trained in an unsupervised fashion. These enable 3DTRL to recover the 3D positional information of the tokens from 2D patches. In practice, 3DTRL is easily plugged into a Transformer. Our experiments demonstrate the effectiveness of 3DTRL in many vision tasks, including image classification, multi-view video alignment, and action recognition. The models with 3DTRL outperform their backbone Transformers in all the tasks with minimal added computation. Our code is available at https://github.com/elicassion/3DTRL.
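For illustration, below is a minimal PyTorch sketch of the mechanism the abstract describes: a pseudo-depth estimator that predicts one depth value per token, a back-projection of the tokens' 2D patch locations into camera coordinates, and a learned camera transformation that maps them into a shared 3D frame whose positions are re-embedded into the tokens. All names here (`Simple3DTRL`, `depth_head`, `pos_embed_3d`, and the simple rotation/translation parameterization) are hypothetical simplifications, not the authors' implementation; refer to the linked repository for the actual design.

```python
# A minimal, illustrative 3DTRL-style layer (hypothetical simplification).
import torch
import torch.nn as nn


class Simple3DTRL(nn.Module):
    """Estimates pseudo-depth per token, back-projects tokens to 3D,
    applies a learned camera transformation, and injects the recovered
    3D positions back into the token representations.

    Both the depth estimator and the camera parameters are trained
    end-to-end with the downstream task, without 3D supervision.
    """

    def __init__(self, dim: int, num_tokens: int):
        super().__init__()
        # Pseudo-depth estimator: one scalar depth per token.
        self.depth_head = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1)
        )
        # Learned camera extrinsics (loosely parameterized here as a
        # raw 3x3 matrix plus a translation, for simplicity).
        self.rotation = nn.Parameter(torch.eye(3))
        self.translation = nn.Parameter(torch.zeros(3))
        # Fixed 2D locations of patch tokens on the image plane,
        # normalized to [-1, 1] (assumes a square patch grid).
        side = int(num_tokens ** 0.5)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, side),
            torch.linspace(-1, 1, side),
            indexing="ij",
        )
        self.register_buffer("uv", torch.stack([xs, ys], -1).reshape(-1, 2))
        # Embeds the recovered 3D positions back into the token dimension.
        self.pos_embed_3d = nn.Linear(3, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim)
        depth = self.depth_head(tokens)                    # (B, N, 1)
        uv = self.uv.unsqueeze(0)                          # (1, N, 2)
        # Back-project: scale 2D patch coordinates by estimated depth
        # (identity intrinsics assumed for brevity).
        cam_xyz = torch.cat([uv * depth, depth], dim=-1)   # (B, N, 3)
        # Map camera coordinates into a shared 3D frame.
        world_xyz = cam_xyz @ self.rotation.T + self.translation
        # Inject 3D positional information into the tokens.
        return tokens + self.pos_embed_3d(world_xyz)


# Usage: drop the layer between Transformer blocks of a ViT-Base-like model.
layer = Simple3DTRL(dim=768, num_tokens=196)   # 14x14 patch grid
x = torch.randn(2, 196, 768)
y = layer(x)                                   # same shape as x
```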