During teleoperation of a mobile robot, providing good operator situation awareness is a major concern, as a single mistake can lead to mission failure. Camera streams are widely used for teleoperation but offer only a limited field of view. In this paper, we present a flexible framework for virtual projections that increases situation awareness, based on a novel method for fusing multiple cameras mounted anywhere on the robot. Moreover, we propose a complementary approach to improve scene understanding by fusing camera images with geometric 3D Lidar data to obtain a colorized point cloud. The implementation on a compact omnidirectional camera reduces system complexity considerably and addresses multiple use cases with a much smaller footprint than traditional approaches such as actuated pan-tilt units. Finally, we demonstrate the generality of the approach by applying it to the multi-camera system of the Boston Dynamics Spot. The software implementation is available as open-source ROS packages on the project page https://tu-darmstadt-ros-pkg.github.io/omnidirectional_vision.
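To make the camera-Lidar fusion step concrete, the following is a minimal sketch (not the paper's implementation, which lives in the linked ROS packages) of colorizing a point cloud by projecting Lidar points into a camera image with a pinhole model. It assumes the points have already been transformed into the camera frame and uses illustrative placeholder intrinsics `K`.

```python
"""Minimal sketch of point cloud colorization via pinhole projection.
Assumes points are already expressed in the camera frame (z forward);
the intrinsic matrix K and all data below are illustrative placeholders."""
import numpy as np

def colorize_point_cloud(points_cam, image, K):
    """Assign an RGB color to each 3D point by projecting it into the image.

    points_cam: (N, 3) array of points in the camera frame.
    image:      (H, W, 3) RGB image from the same camera.
    K:          (3, 3) pinhole intrinsic matrix.
    Returns an (M, 6) array [x, y, z, r, g, b] for points that project
    inside the image bounds.
    """
    # Keep only points in front of the camera.
    pts = points_cam[points_cam[:, 2] > 0.0]

    # Pinhole projection: (u, v) = (K @ p)[:2] / p_z.
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard points that fall outside the image.
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the RGB color at each projected pixel.
    colors = image[v[valid], u[valid]]
    return np.hstack([pts[valid], colors.astype(np.float64)])

if __name__ == "__main__":
    # Synthetic example; a real system would use calibrated intrinsics,
    # extrinsics from the robot's TF tree, and live sensor messages.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    points = np.random.uniform([-2, -2, 0.5], [2, 2, 10.0], (1000, 3))
    print(colorize_point_cloud(points, image, K).shape)
```

The same projection machinery, run in the opposite direction (sampling source cameras to fill a virtual view), underlies the virtual-projection idea described above; the open-source packages on the project page contain the actual multi-camera implementation.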