Visible light positioning (VLP) is a promising technology because it can provide high-accuracy positioning based on existing lighting infrastructure. However, existing approaches often require dense lighting distributions, and the complexity of indoor environments makes it challenging to develop a robust VLP system. In this work, we propose a loosely coupled multi-sensor fusion method based on VLP and Simultaneous Localization and Mapping (SLAM), combining light detection and ranging (LiDAR), odometry, and a rolling-shutter camera. Our method provides accurate and robust robot localization and navigation in LED-shortage or even LED-outage situations. The efficacy of the proposed scheme is verified by extensive real-time experiments. The results show that our scheme achieves an average accuracy of 2 cm, with an average computational time of around 50 ms on low-cost embedded platforms.