Transparent object perception is a crucial skill for applications such as robot manipulation in household and laboratory settings. Existing methods utilize RGB-D or stereo inputs to handle a subset of perception tasks, including depth and pose estimation. However, transparent object perception remains an open problem. In this paper, we forgo the unreliable depth maps from RGB-D sensors and extend the stereo-based method. Our proposed method, MVTrans, is an end-to-end multi-view architecture with multiple perception capabilities, including depth estimation, segmentation, and pose estimation. Additionally, we establish a novel procedural photo-realistic dataset generation pipeline and create a large-scale transparent object detection dataset, Syn-TODD, which is suitable for training networks with all three modalities: RGB-D, stereo, and multi-view RGB. Project Site: https://ac-rad.github.io/MVTrans/