The Vision Transformer (ViT) architecture has recently established its place in the computer vision literature, with multiple architectures for recognizing image data and other visual modalities. However, training ViTs for RGB-D object recognition remains an understudied topic, viewed in recent literature only through the lens of multi-task pretraining in multiple modalities. Such approaches are often computationally intensive and have not yet been applied to challenging object-level classification tasks. In this work, we propose a simple yet strong recipe for transferring pretrained ViTs to RGB-D domains for single-view 3D object recognition, focusing on fusing RGB and depth representations encoded jointly by the ViT. Compared to previous works on multimodal Transformers, the key challenge here is to use the attested flexibility of ViTs to capture cross-modal interactions at the downstream rather than the pretraining stage. We explore which depth representation yields better accuracy and compare two methods for injecting RGB-D fusion within the ViT architecture (i.e., early vs. late fusion). Our results on the Washington RGB-D Objects dataset demonstrate that in such RGB $\rightarrow$ RGB-D scenarios, late fusion techniques work better than the more popularly employed early fusion. With our transfer baseline, adapted ViTs score up to 95.1\% top-1 accuracy on Washington, achieving new state-of-the-art results on this benchmark. We additionally evaluate our approach with an open-ended lifelong learning protocol, where we show that our adapted RGB-D encoder leads to features that outperform unimodal encoders, even without explicit fine-tuning. We further integrate our method with a robot framework and demonstrate how it can serve as a perception utility in an interactive robot learning scenario, both in simulation and with a real robot.
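To make the early vs. late fusion distinction concrete, below is a minimal PyTorch sketch of the two schemes around a ViT-style encoder. All module names, dimensions, and the shared-encoder choice are illustrative assumptions for exposition, not the paper's exact implementation; the 51 classes match the Washington RGB-D Objects categories, and the depth input is assumed to be a 3-channel colorized depth map.

\begin{verbatim}
# Minimal sketch: early vs. late RGB-D fusion around a ViT-style encoder.
# All names/sizes are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Image-to-token embedding via a strided 16x16 convolution."""
    def __init__(self, in_chans=3, dim=768, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch, stride=patch)
    def forward(self, x):                               # (B, C, H, W)
        return self.proj(x).flatten(2).transpose(1, 2)  # (B, N, dim)

def make_encoder(dim=768, depth=4, heads=12):
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       dim_feedforward=4 * dim,
                                       batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

class EarlyFusionViT(nn.Module):
    """Early fusion: RGB and depth tokens are concatenated along the
    sequence axis and attended to jointly in every encoder layer."""
    def __init__(self, num_classes=51, dim=768):
        super().__init__()
        self.rgb_embed = PatchEmbed(3, dim)
        self.depth_embed = PatchEmbed(3, dim)  # colorized depth (assumed)
        self.encoder = make_encoder(dim)
        self.head = nn.Linear(dim, num_classes)
    def forward(self, rgb, depth):
        tokens = torch.cat([self.rgb_embed(rgb),
                            self.depth_embed(depth)], dim=1)
        feats = self.encoder(tokens).mean(dim=1)  # global average pooling
        return self.head(feats)

class LateFusionViT(nn.Module):
    """Late fusion: each modality is encoded separately (here with shared
    encoder weights) and pooled features are fused at the classifier."""
    def __init__(self, num_classes=51, dim=768):
        super().__init__()
        self.rgb_embed = PatchEmbed(3, dim)
        self.depth_embed = PatchEmbed(3, dim)
        self.encoder = make_encoder(dim)  # shared across modalities
        self.head = nn.Linear(2 * dim, num_classes)
    def forward(self, rgb, depth):
        f_rgb = self.encoder(self.rgb_embed(rgb)).mean(dim=1)
        f_depth = self.encoder(self.depth_embed(depth)).mean(dim=1)
        return self.head(torch.cat([f_rgb, f_depth], dim=-1))

if __name__ == "__main__":
    rgb = torch.randn(2, 3, 224, 224)
    depth = torch.randn(2, 3, 224, 224)        # 3-channel colorized depth
    print(EarlyFusionViT()(rgb, depth).shape)  # torch.Size([2, 51])
    print(LateFusionViT()(rgb, depth).shape)   # torch.Size([2, 51])
\end{verbatim}

Note the trade-off the sketch exposes: early fusion doubles the token sequence seen by every layer (quadratic attention cost over both modalities), while late fusion keeps per-modality sequences short and defers interaction to the feature level, which is the regime the abstract reports as working better for RGB $\rightarrow$ RGB-D transfer.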