Inspired by the great success of CNNs in image recognition, view-based methods applied CNNs to the projected views of 3D objects and achieved excellent performance. Nevertheless, multi-view CNN models cannot model communication between patches from different views, which limits their effectiveness in 3D object recognition. Inspired by the recent success of the vision Transformer in image recognition, we propose a Multi-view Vision Transformer (MVT) for 3D object recognition. Since each patch feature in a Transformer block has a global receptive field, communication between patches from different views is achieved naturally. Meanwhile, MVT requires much less inductive bias than its CNN counterparts. Considering both effectiveness and efficiency, we develop a global-local structure for our MVT. Experiments on two public benchmarks, ModelNet40 and ModelNet10, demonstrate the competitive performance of our MVT.
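The cross-view communication argument can be made concrete with a minimal sketch: if patch tokens from all views are pooled into one sequence before self-attention, every output token is a weighted mix of tokens from every view, so views interact in a single attention step. This is a toy illustration of the mechanism only, not the paper's actual MVT architecture; the embeddings and dimensions below are hypothetical.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """Single-head scaled dot-product self-attention over a pooled
    token sequence. Because every query attends to every key, each
    output mixes information from ALL tokens -- including tokens that
    came from other views (the 'global receptive field')."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        # each output coordinate is a convex combination over all tokens
        out.append([sum(wj * tok[i] for wj, tok in zip(w, tokens))
                    for i in range(d)])
    return out

# Two views with two 2-d patch tokens each (hypothetical toy embeddings).
view_a = [[1.0, 0.0], [0.0, 1.0]]
view_b = [[1.0, 1.0], [0.5, 0.5]]
fused = self_attention(view_a + view_b)  # cross-view mixing in one step
```

A multi-view CNN, by contrast, would process `view_a` and `view_b` through separate convolutional streams and only merge them at a late pooling stage, which is exactly the limitation the abstract points to.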