Motion deblurring is a highly ill-posed problem because motion information is lost in the blur degradation process. Since event cameras can capture apparent motion with a high temporal resolution, several attempts have explored the potential of events for guiding deblurring. These methods generally assume that the exposure time equals the reciprocal of the video frame rate. However, this does not hold in real situations, where the exposure time may be unknown and vary dynamically depending on the video shooting environment (e.g., illumination conditions). In this paper, we address event-guided motion deblurring under a dynamically varying, unknown exposure time of the frame-based camera. To this end, we first derive a new formulation for event-guided motion deblurring that accounts for the exposure and readout time in the video frame acquisition process. We then propose a novel end-to-end learning framework for event-guided motion deblurring. In particular, we design a novel Exposure Time-based Event Selection (ETES) module that selectively uses event features by estimating the cross-modal correlation between the features from the blurred frames and the events. Moreover, we propose a feature fusion module that effectively fuses the selected event features with the blurred-frame features. We conduct extensive experiments on various datasets and demonstrate that our method achieves state-of-the-art performance.
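To make the role of exposure and readout time concrete, the following is a minimal sketch of the blur formation model such a formulation would build on, in the spirit of the standard event-based double-integral view; the symbols $T$ (exposure), $T_r$ (readout), and $c$ (event contrast threshold) are our illustrative notation, not necessarily the paper's exact formulation:

\begin{align}
  B_i &= \frac{1}{T} \int_{t_i}^{t_i + T} L(t)\, dt,
  & t_{i+1} - t_i &= T + T_r,
\end{align}
\begin{equation}
  L(t) = L(t_i)\, \exp\!\Big( c \int_{t_i}^{t} e(s)\, ds \Big),
\end{equation}

where $L(t)$ is the latent sharp intensity and $e(s)$ the event stream. Under this model, only events with timestamps inside the exposure window $[t_i,\, t_i + T]$ relate the blurred frame $B_i$ to its latent frames; events falling in the readout gap $T_r$ should be excluded, which is exactly what event selection must learn when $T$ is unknown.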
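For intuition, here is a minimal PyTorch sketch of how such exposure-time-based event selection and fusion could be realized: event features are gated by a per-pixel cross-modal correlation with the blurred-frame features, then fused by concatenation and convolution. All module names, layer sizes, and the cosine-similarity gating are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class ETES(nn.Module):
    """Sketch of an Exposure Time-based Event Selection (ETES) module.

    Gates event features by their cross-modal correlation with
    blurred-frame features, so that events unrelated to the exposure
    window (e.g., triggered during readout) are suppressed.
    Hypothetical design; layer sizes are illustrative only.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.frame_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.event_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, frame_feat: torch.Tensor, event_feat: torch.Tensor) -> torch.Tensor:
        # frame_feat, event_feat: (B, C, H, W)
        f = self.frame_proj(frame_feat)
        e = self.event_proj(event_feat)
        # Per-pixel cosine similarity over channels as a correlation score.
        corr = torch.cosine_similarity(f, e, dim=1, eps=1e-6).unsqueeze(1)  # (B, 1, H, W)
        gate = torch.sigmoid(corr)  # soft selection mask in (0, 1)
        return event_feat * gate    # keep only events correlated with the blur


class FusionBlock(nn.Module):
    """Sketch of a feature fusion module: concatenate the selected event
    features with the frame features and mix them with a 3x3 conv."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, frame_feat: torch.Tensor, selected_event_feat: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([frame_feat, selected_event_feat], dim=1))
```

The gating step is the key design choice under an unknown exposure time: rather than hard-thresholding events by timestamp (which would require knowing the exposure window), the network learns a soft, feature-level selection from the cross-modal correlation itself.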