LiDAR can capture accurate depth information in large-scale scenarios without being affected by lighting conditions, and the captured point clouds contain gait-related 3D geometric properties and dynamic motion characteristics. We make the first attempt to leverage LiDAR to remedy the limitations of view-dependent and light-sensitive cameras for more robust and accurate gait recognition. In this paper, we propose a LiDAR-camera-based gait recognition method with an effective multi-modal feature fusion strategy that fully exploits the advantages of both point clouds and images. In particular, we present a new in-the-wild gait dataset, LiCamGait, which provides multi-modal visual data and diverse 2D/3D representations. Our method achieves state-of-the-art performance on the new dataset. Code and dataset will be released upon publication.
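To make the multi-modal idea concrete, the sketch below shows one way LiDAR and camera features could be fused for gait recognition: a PointNet-style encoder for the point cloud, a small CNN for a gait silhouette, and fusion by concatenation followed by a projection head. All module names and the concatenation-based fusion are illustrative assumptions for exposition, not the architecture actually proposed in this paper.

```python
# Hypothetical sketch of LiDAR-camera feature fusion for gait recognition.
# The encoders and concatenation fusion are illustrative assumptions, not
# the fusion strategy proposed in the paper.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """PointNet-style per-point MLP followed by max pooling."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, pts):             # pts: (B, N, 3) LiDAR points
        feats = self.mlp(pts)           # (B, N, out_dim) per-point features
        return feats.max(dim=1).values  # (B, out_dim) global descriptor


class SilhouetteEncoder(nn.Module):
    """Small CNN over a binary gait silhouette frame."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, img):             # img: (B, 1, H, W) silhouette
        x = self.conv(img).flatten(1)   # (B, 64) pooled CNN features
        return self.fc(x)               # (B, out_dim)


class FusionGaitNet(nn.Module):
    """Concatenate modality features and project to a gait embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.pc_enc = PointCloudEncoder()
        self.img_enc = SilhouetteEncoder()
        self.head = nn.Linear(512, embed_dim)

    def forward(self, pts, img):
        fused = torch.cat([self.pc_enc(pts), self.img_enc(img)], dim=1)
        return self.head(fused)         # (B, embed_dim) identity embedding


if __name__ == "__main__":
    model = FusionGaitNet()
    pts = torch.randn(4, 1024, 3)       # 4 point clouds, 1024 points each
    sil = torch.rand(4, 1, 64, 44)      # 4 silhouette frames
    print(model(pts, sil).shape)        # torch.Size([4, 128])
```

In practice such an embedding would be trained with a recognition loss (e.g., triplet or cross-entropy over identities); the concatenation step here stands in for whatever fusion mechanism the paper actually introduces.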