Lidar-based SLAM systems are highly sensitive to adverse conditions such as occlusion, noise, and field-of-view (FoV) degradation, yet existing robustness evaluation methods either lack physical grounding or do not capture sensor-specific behavior. This paper presents a sensor-aware, phenomenological framework for simulating interpretable lidar degradations directly on real point clouds, enabling controlled and reproducible SLAM stress testing. Unlike image-derived corruption benchmarks (e.g., SemanticKITTI-C) or simulation-only approaches (e.g., LiDARsim), the proposed system preserves per-point geometry, intensity, and temporal structure while applying structured dropout, FoV reduction, Gaussian noise, occlusion masking, sparsification, and motion distortion. The framework features automatic topic and sensor detection, modular configuration with four severity tiers (light--extreme), and real-time performance (less than 20 ms per frame) compatible with ROS workflows. Experimental validation across three lidar architectures and five state-of-the-art SLAM systems reveals distinct robustness patterns shaped by sensor design and environmental context. The open-source implementation provides a practical foundation for benchmarking lidar-based SLAM under physically meaningful degradation scenarios.
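To make the degradation families concrete, the following is a minimal sketch of how three of the named corruptions (Gaussian noise, FoV reduction, and sparsification) could be applied directly to a real point cloud. It is an illustration only, not the paper's implementation: the function names, parameter choices, and the assumption that the cloud is an N×3 array of Cartesian coordinates are all hypothetical.

```python
import numpy as np


def apply_gaussian_noise(points, sigma=0.02, rng=None):
    """Perturb each coordinate with zero-mean Gaussian noise (sigma in metres).

    Hypothetical severity parameter: sigma would scale with the tier
    (light--extreme) in a framework like the one described.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    return points + rng.normal(0.0, sigma, size=points.shape)


def reduce_fov(points, half_angle_deg=60.0):
    """Keep only points whose horizontal azimuth (about +x) is within
    +/- half_angle_deg, simulating a narrowed field of view."""
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return points[np.abs(azimuth) <= half_angle_deg]


def sparsify(points, drop_prob=0.3, rng=None):
    """Randomly drop a fraction of points, simulating sparsification
    or unstructured dropout."""
    rng = rng if rng is not None else np.random.default_rng(0)
    keep = rng.random(len(points)) >= drop_prob
    return points[keep]
```

Because each operator maps an N×3 array to a (possibly smaller) M×3 array, such degradations compose naturally and preserve per-point structure, which is consistent with the abstract's claim that geometry and intensity channels survive the corruption pipeline.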