The Vision Transformer (ViT) architecture has established its place in the computer vision literature; however, training ViTs for RGB-D object recognition remains understudied, addressed in recent literature only through the lens of multi-task pretraining across multiple vision modalities. Such approaches are often computationally intensive, relying on the scale of multiple pretraining datasets to align RGB with 3D information. In this work, we propose a simple yet strong recipe for transferring pretrained ViTs to RGB-D domains for 3D object recognition, focusing on fusing RGB and depth representations encoded jointly by the ViT. Compared to previous work on multimodal Transformers, the key challenge here is to use the attested flexibility of ViTs to capture cross-modal interactions at the downstream rather than the pretraining stage. We explore which depth representation yields better accuracy and compare early- and late-fusion techniques for aligning the RGB and depth modalities within the ViT architecture. Experimental results on the Washington RGB-D Objects dataset (ROD) demonstrate that in such RGB -> RGB-D transfer scenarios, late fusion works better than the more popularly employed early fusion. With our transfer baseline, fusion ViTs score up to 95.4% top-1 accuracy on ROD, setting a new state of the art on this benchmark. We further show the benefits of our multimodal fusion baseline over unimodal feature extractors in a synthetic-to-real visual adaptation setting as well as in an open-ended lifelong learning scenario on the ROD benchmark, where our model outperforms previous work by a margin of >8%. Finally, we integrate our method with a robot framework and demonstrate how it can serve as a perception utility in an interactive robot learning scenario, both in simulation and with a real robot.
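To make the early- vs. late-fusion distinction concrete, the sketch below shows one minimal way to realize both with a shared pretrained ViT backbone. This is an illustrative interpretation under stated assumptions, not the paper's exact implementation: the timm backbone choice, the class names (LateFusionViT, EarlyFusionViT), and the rendering of depth as a 3-channel image are ours.

```python
# Minimal sketch of early vs. late RGB/depth fusion with a pretrained ViT.
# Assumes torch and timm are installed; all names here are illustrative.
import torch
import torch.nn as nn
import timm


class LateFusionViT(nn.Module):
    """Encode RGB and depth separately with a shared pretrained ViT,
    then fuse the two pooled embeddings at the classifier (late fusion)."""

    def __init__(self, num_classes: int, backbone: str = "vit_base_patch16_224"):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.encoder = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.head = nn.Linear(2 * self.encoder.num_features, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Depth is assumed to be rendered as a 3-channel image (e.g., a
        # colorized encoding) so the pretrained patch embedding is reusable.
        f_rgb = self.encoder(rgb)      # (B, feat_dim)
        f_depth = self.encoder(depth)  # (B, feat_dim)
        return self.head(torch.cat([f_rgb, f_depth], dim=-1))


class EarlyFusionViT(nn.Module):
    """Concatenate RGB and depth patch tokens into one sequence so every
    Transformer block attends across both modalities (early fusion)."""

    def __init__(self, num_classes: int, backbone: str = "vit_base_patch16_224"):
        super().__init__()
        self.encoder = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.head = nn.Linear(self.encoder.num_features, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        enc = self.encoder
        tok_rgb = enc.patch_embed(rgb) + enc.pos_embed[:, 1:]
        tok_depth = enc.patch_embed(depth) + enc.pos_embed[:, 1:]
        cls = enc.cls_token.expand(rgb.shape[0], -1, -1) + enc.pos_embed[:, :1]
        x = torch.cat([cls, tok_rgb, tok_depth], dim=1)  # one joint sequence
        x = enc.norm(enc.blocks(x))
        return self.head(x[:, 0])  # classify from the [CLS] token


if __name__ == "__main__":
    model = LateFusionViT(num_classes=51)  # ROD has 51 object categories
    rgb = torch.randn(2, 3, 224, 224)
    depth = torch.randn(2, 3, 224, 224)
    print(model(rgb, depth).shape)  # torch.Size([2, 51])
```

In the late-fusion variant the two modalities never interact inside the Transformer, only at the linear head; in the early-fusion variant cross-modal attention happens in every block at the cost of a roughly doubled token sequence.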