Video deblurring is a highly ill-posed problem due to the loss of motion information in the blur degradation process. Since event cameras can capture apparent motion with high temporal resolution, several attempts have explored the potential of events for guiding video deblurring. These methods generally assume that the exposure time equals the reciprocal of the video frame rate. However, this is not true in real situations, where the exposure time may be unknown and vary dynamically depending on the video shooting environment (e.g., illumination conditions). In this paper, we address event-guided video deblurring under a dynamically varying and unknown exposure time of the frame-based camera. To this end, we first derive a new formulation for event-guided video deblurring that accounts for the exposure and readout time in the video frame acquisition process. We then propose a novel end-to-end learning framework for event-guided video deblurring. In particular, we design a novel Exposure Time-based Event Selection (ETES) module that selectively uses event features by estimating the cross-modal correlation between the features from the blurred frames and the events. Moreover, we propose a feature fusion module to effectively fuse the selected features from the events and the blurred frames. We conduct extensive experiments on various datasets and demonstrate that our method achieves state-of-the-art performance. Our project code and pretrained models will be available.
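As a rough illustration of the selection-and-fusion idea summarized above, the following PyTorch sketch gates per-bin event features by their cross-modal correlation with blurred-frame features, so bins outside the (unknown) exposure window are suppressed, and then fuses the surviving event features with the frame features. This is a minimal sketch under assumed shapes and layer choices; the module names (ETESBlock, FuseBlock), channel sizes, and the cosine-similarity gating are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; not the authors' ETES/fusion implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ETESBlock(nn.Module):
    """Gate per-bin event features by cross-modal correlation with frame features (assumed design)."""

    def __init__(self, frame_ch: int, event_ch: int, num_bins: int):
        super().__init__()
        self.frame_proj = nn.Conv2d(frame_ch, event_ch, kernel_size=1)
        self.num_bins = num_bins
        self.event_ch = event_ch

    def forward(self, frame_feat, event_feat):
        # frame_feat: (B, Cf, H, W); event_feat: (B, T*Ce, H, W) with T temporal bins
        b, _, h, w = event_feat.shape
        f = self.frame_proj(frame_feat)                         # (B, Ce, H, W)
        e = event_feat.view(b, self.num_bins, self.event_ch, h, w)
        # Cosine similarity between each event bin and the frame feature
        sim = F.cosine_similarity(e, f.unsqueeze(1), dim=2)     # (B, T, H, W)
        gate = torch.sigmoid(sim).unsqueeze(2)                  # soft per-bin selection
        return (e * gate).view(b, -1, h, w)                     # re-flatten bins


class FuseBlock(nn.Module):
    """Fuse selected event features with frame features via concat + 1x1 conv (assumed design)."""

    def __init__(self, frame_ch: int, event_ch: int, num_bins: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(frame_ch + num_bins * event_ch, out_ch, kernel_size=1)

    def forward(self, frame_feat, selected_event_feat):
        return self.fuse(torch.cat([frame_feat, selected_event_feat], dim=1))


if __name__ == "__main__":
    frame = torch.randn(1, 64, 32, 32)
    events = torch.randn(1, 5 * 16, 32, 32)    # 5 temporal bins, 16 channels each (assumed)
    etes = ETESBlock(frame_ch=64, event_ch=16, num_bins=5)
    fuse = FuseBlock(frame_ch=64, event_ch=16, num_bins=5, out_ch=64)
    out = fuse(frame, etes(frame, events))      # (1, 64, 32, 32)
```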