Nighttime environments pose significant challenges for camera-based perception, as existing methods passively rely on scene lighting. We introduce Lighting-driven Dynamic Active Sensing (LiDAS), a closed-loop active illumination system that combines off-the-shelf visual perception models with high-definition headlights. Rather than uniformly brightening the scene, LiDAS dynamically predicts an optimal illumination field that maximizes downstream perception performance, e.g., dimming empty areas to reallocate light to object regions. LiDAS enables zero-shot nighttime generalization of daytime-trained models through adaptive illumination control. Trained on synthetic data and deployed zero-shot in real-world closed-loop driving scenarios, LiDAS achieves +18.7% mAP50 and +5.0% mIoU over standard low-beam at equal power, and maintains performance while reducing energy use by 40%. LiDAS complements domain-generalization methods, further strengthening robustness without retraining. By turning readily available headlights into active vision actuators, LiDAS offers a cost-effective solution for robust nighttime perception.
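The core idea of reallocating a fixed light budget away from empty regions and toward object regions can be illustrated with a minimal toy sketch. This is not the paper's learned predictor: it assumes the closed loop reduces to redistributing a fixed power budget over headlight pixels according to an objectness map, with a small uniform floor kept everywhere for baseline visibility. The function name `allocate_illumination` and the proportional allocation rule are illustrative assumptions.

```python
import numpy as np

def allocate_illumination(objectness, total_power, floor=0.1):
    """Toy illumination-field allocator (illustrative, not the paper's model).

    Each headlight pixel receives a uniform baseline share (a fraction
    `floor` of the budget, spread evenly), and the remaining power is
    redistributed proportionally to the per-pixel objectness score.
    Total emitted power always equals `total_power`.
    """
    objectness = np.asarray(objectness, dtype=float)
    n = objectness.size
    base = floor * total_power / n       # uniform baseline per pixel
    remaining = total_power - base * n   # budget available for reallocation
    total = objectness.sum()
    if total > 0:
        weights = objectness / total     # normalize scores to a distribution
    else:
        weights = np.full(objectness.shape, 1.0 / n)  # no objects: stay uniform
    return base + remaining * weights

# Example: one strongly salient cell receives most of the budget,
# while empty cells keep only the baseline floor.
obj = np.array([[0.0, 0.9],
                [0.1, 0.0]])
field = allocate_illumination(obj, total_power=100.0)
```

Here the budget is conserved exactly (the field sums to 100), and the high-objectness cell ends up far brighter than the empty ones, mimicking the described dim-empty/brighten-objects behavior at equal power.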