With the recent development of autonomous driving technology, the pursuit of efficiency in repetitive tasks, and the growing value of non-face-to-face services, mobile service robots such as delivery robots and serving robots are attracting attention, and demand for them is increasing steadily. However, when something goes wrong, most commercial serving robots must return to their starting position and orientation before they can operate normally again. In this paper, we address this problem with end-to-end relocalization of serving robots: predicting the robot pose directly from onboard sensor data alone using neural networks. In particular, we propose a deep neural network architecture for relocalization based on camera-2D LiDAR sensor fusion, which we call FusionLoc. In the proposed method, multi-head self-attention complements the different types of information captured by the two sensors. Our experiments on a dataset collected by a commercial serving robot demonstrate that FusionLoc outperforms previous relocalization methods that take only a single image or a 2D LiDAR point cloud, as well as a straightforward fusion method that simply concatenates their features.
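To make the fusion idea concrete, the following is a minimal NumPy sketch of multi-head self-attention over two sensor tokens. This is not the authors' implementation: the feature dimensions, the random stand-in projection matrices, and the framing of the image feature and LiDAR feature as a two-token sequence are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(tokens, num_heads=4, seed=0):
    """Toy multi-head self-attention over a short token sequence.

    tokens: (seq_len, d_model). Random projections stand in for
    learned weight matrices; a trained model would use fitted ones.
    """
    rng = np.random.default_rng(seed)
    seq_len, d_model = tokens.shape
    d_head = d_model // num_heads
    head_outputs = []
    for _ in range(num_heads):
        Wq = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wk = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wv = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
        # Scaled dot-product attention lets each token attend to both
        # sensor features, mixing camera and LiDAR information.
        attn = softmax(Q @ K.T / np.sqrt(d_head), axis=-1)
        head_outputs.append(attn @ V)
    # Concatenate heads back to (seq_len, d_model).
    return np.concatenate(head_outputs, axis=-1)

# Treat the image feature and the 2D LiDAR feature as two tokens.
image_feat = np.random.default_rng(1).standard_normal(64)
lidar_feat = np.random.default_rng(2).standard_normal(64)
tokens = np.stack([image_feat, lidar_feat])   # (2, 64)
fused = multi_head_self_attention(tokens)     # (2, 64)
# Flatten the fused tokens as input to a pose-regression head.
pose_input = fused.reshape(-1)                # (128,)
print(pose_input.shape)
```

In a full pipeline, the two token vectors would come from a CNN image encoder and a point-cloud encoder, and `pose_input` would feed fully connected layers that regress the robot's position and orientation.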