Low-light video enhancement (LLVE) is an important yet challenging task with many applications such as photography and autonomous driving. Unlike single-image low-light enhancement, most LLVE methods exploit temporal information from adjacent frames to restore the color of and remove the noise from the target frame. However, these algorithms, built on the framework of multi-frame alignment and enhancement, may produce multi-frame fusion artifacts under extreme low light or fast motion. In this paper, inspired by the low latency and high dynamic range of events, we use events synthesized from multiple frames to guide the enhancement and restoration of low-light videos. Our method contains three stages: 1) event synthesis and enhancement, 2) event and image fusion, and 3) low-light enhancement. Within this framework, we design two novel modules (event-image fusion transform and event-guided dual branch) for the second and third stages, respectively. Extensive experiments show that our method outperforms existing low-light video and single-image enhancement approaches on both synthetic and real LLVE datasets.
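The abstract does not specify how events are synthesized from frames; a common choice is the contrast-threshold model of a dynamic vision sensor, where an event fires whenever the log intensity at a pixel changes by more than a fixed threshold. The following is a minimal sketch of that model under these assumptions; the function name `synthesize_events` and the `threshold` value are hypothetical and not taken from the paper.

```python
import numpy as np

def synthesize_events(frame_prev, frame_next, threshold=0.2, eps=1e-6):
    """Approximate event synthesis via the DVS contrast-threshold model
    (an assumption, not necessarily the paper's method): each pixel fires
    floor(|delta log I| / threshold) events with the sign of the change.
    Inputs are grayscale frames with values in [0, 1]."""
    log_prev = np.log(frame_prev + eps)
    log_next = np.log(frame_next + eps)
    diff = log_next - log_prev
    counts = np.floor(np.abs(diff) / threshold)
    # Signed event-count map: positive for brightening, negative for dimming.
    return (np.sign(diff) * counts).astype(np.float32)

# Usage sketch: stack event maps from adjacent low-light frames as a
# guidance signal for the fusion and enhancement stages.
frames = [np.random.rand(256, 256).astype(np.float32) for _ in range(3)]
event_stack = np.stack(
    [synthesize_events(frames[i], frames[i + 1]) for i in range(2)], axis=0
)
print(event_stack.shape)  # (2, 256, 256)
```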