We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations. Unlike existing works, which often require complex datasets such as registered RGBD sequences, we train on an unordered set of RGB images. This enables learning from a single camera view, e.g., in an existing robotic cell with a fixed-mounted camera. We create synthetic views and dense pixel correspondences using data augmentations. We find that our descriptors are competitive with existing methods despite the simpler data recording and setup requirements. We show that training on synthetic correspondences yields descriptors that remain consistent across a broad range of camera views. We compare against training with geometric correspondences from multiple views and provide ablation studies. We also demonstrate a robotic bin-picking experiment in which descriptors learned from a fixed-mounted camera are used to define grasp preferences.
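To illustrate the core idea of generating dense pixel correspondences from a single RGB image via augmentation, the following is a minimal sketch, not the authors' implementation: a random homography warps the image into a synthetic second view, and the same homography maps sampled pixel coordinates, giving ground-truth matches. The function name, parameters, and the use of OpenCV here are illustrative assumptions.

```python
import cv2
import numpy as np

def synthetic_correspondences(image, max_shift=0.15, n_points=512, seed=None):
    """Illustrative sketch: warp an image with a random homography and
    return the warped view plus matched pixel coordinates (src -> dst)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    # Sample a random homography by jittering the four image corners.
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * [w, h]
    H = cv2.getPerspectiveTransform(corners, (corners + jitter).astype(np.float32))

    # Synthetic second view of the same scene.
    warped = cv2.warpPerspective(image, H, (w, h))

    # Sample source pixels and map them through H to obtain correspondences.
    src = np.stack([rng.uniform(0, w, n_points),
                    rng.uniform(0, h, n_points)], axis=1).astype(np.float32)
    dst = cv2.perspectiveTransform(src[None], H)[0]

    # Keep only matches that land inside the warped image.
    valid = (dst[:, 0] >= 0) & (dst[:, 0] < w) & (dst[:, 1] >= 0) & (dst[:, 1] < h)
    return warped, src[valid], dst[valid]
```

Such (src, dst) pairs could then serve as positive correspondences for a contrastive or pixelwise descriptor loss; photometric augmentations (color jitter, blur, noise) would typically be applied to the two views independently.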