Robot vision is greatly affected by occlusions, which pose challenges to autonomous systems. As the robot moves within the camera's field of view, it may hide targets of interest, leading to failures in task execution. For example, if a target of interest is partially occluded by the robot, detecting and grasping it correctly becomes very challenging. To address this problem, we propose a computationally lightweight method to determine the areas that the robot occludes. For this purpose, we use the Unified Robot Description Format (URDF) to generate a virtual depth image of the 3D robot model. Using the virtual depth image, we can effectively determine the partially occluded areas and thus improve the robustness of the information provided by the perception system. Due to its real-time capabilities, the method can successfully detect occlusions of moving targets by the moving robot. We validate the effectiveness of the method in an experimental setup with a 6-DoF robot arm and an RGB-D camera by detecting and handling occlusions in two tasks: pose estimation of a moving object for pickup, and human tracking for robot handover. The code is available at \url{https://github.com/auth-arl/virtual\_depth\_image}.
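The core comparison described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a virtual depth image of the robot model has already been rendered from the camera pose (with non-robot pixels set to infinity) and flags the pixels where the robot lies in front of the measured scene. The function name and the tolerance parameter are hypothetical.

```python
import numpy as np

def occlusion_mask(virtual_depth, measured_depth, tol=0.02):
    """Flag pixels occluded by the robot.

    virtual_depth:  depth of the rendered robot model (meters),
                    np.inf where the robot does not project.
    measured_depth: depth image from the RGB-D camera (meters),
                    0 marks invalid measurements.
    tol:            depth tolerance (meters) absorbing sensor noise
                    and model inaccuracy (assumed value).
    """
    robot_pixels = np.isfinite(virtual_depth)   # robot projects here
    valid = measured_depth > 0                  # sensor gave a reading
    # The robot occludes the scene where its rendered depth is at or in
    # front of the measured depth (within the tolerance).
    return robot_pixels & valid & (virtual_depth <= measured_depth + tol)
```

Downstream modules (e.g. object detection or human tracking) can then discard or down-weight image regions where this mask is set, rather than trusting measurements that actually belong to the robot body.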