Visible light positioning (VLP) is a promising technique, as it can provide high-accuracy positioning based on existing lighting infrastructure. However, existing approaches often require dense lighting distributions, and complicated indoor environments make it challenging to develop a robust VLP system. In this work, we propose a loosely coupled multi-sensor fusion method based on VLP and Simultaneous Localization and Mapping (SLAM), using light detection and ranging (LiDAR), odometry, and a rolling-shutter camera. Our method provides accurate and robust robot localization and navigation in LED-shortage or even LED-outage situations. The efficacy of the proposed scheme is verified by extensive real-time experiments. The results show that the proposed scheme achieves an average accuracy of 2 cm with an average computational time of around 50 ms on low-cost embedded platforms.
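The loosely coupled fusion idea can be illustrated with a minimal sketch (not the paper's implementation): odometry dead-reckons the pose between updates, and an absolute VLP fix, when an LED is visible, corrects the accumulated drift. The names (`fuse_pose`, `VLP_WEIGHT`) and the simple weighted blend are illustrative assumptions.

```python
# Minimal 2-D sketch of loosely-coupled pose fusion (assumed, not the
# paper's actual estimator): odometry propagates the pose; an absolute
# VLP fix, when available, corrects drift via a weighted blend.

VLP_WEIGHT = 0.8  # assumed trust placed in the absolute VLP fix

def propagate(pose, odom_delta):
    """Dead-reckon: add the odometry increment (dx, dy) to the pose."""
    return (pose[0] + odom_delta[0], pose[1] + odom_delta[1])

def fuse_pose(pose, vlp_fix):
    """Loosely-coupled update: blend the dead-reckoned pose with an
    absolute VLP position; fall back to odometry alone when no LED
    is visible (the LED-shortage/outage case)."""
    if vlp_fix is None:
        return pose
    return (VLP_WEIGHT * vlp_fix[0] + (1 - VLP_WEIGHT) * pose[0],
            VLP_WEIGHT * vlp_fix[1] + (1 - VLP_WEIGHT) * pose[1])

pose = (0.0, 0.0)
steps = [((1.0, 0.0), None),         # no LED visible: odometry only
         ((1.0, 0.0), (2.1, 0.05)),  # VLP fix corrects drift
         ((1.0, 0.0), None)]         # outage again: keep fused pose
for odom, vlp in steps:
    pose = fuse_pose(propagate(pose, odom), vlp)
```

In a loosely coupled design each sensor pipeline produces its own pose estimate before fusion, so the system degrades gracefully to SLAM/odometry when VLP fixes are unavailable, as in the LED-outage scenario above.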