A motion-blurred image is the temporal average of multiple sharp frames over the exposure time. Recovering these sharp video frames from a single blurred image is nontrivial, not only because of its strong ill-posedness, but also because of the various types of complex motion that occur in reality, such as rotation and motion in depth. In this work, we report a generalized video extraction method based on affine motion modeling, which can tackle multiple types of complex motion and their mixtures. In its workflow, the moving objects are first segmented in the alpha channel, which allows objects with different motions to be recovered separately. We then reduce the variable space by modeling each video clip as a series of affine transformations of a reference frame, and introduce $\ell_0$-norm total variation regularization to attenuate ringing artifacts. Differentiable affine operators are employed to realize gradient-descent optimization of the affine model, following a novel coarse-to-fine strategy that further reduces artifacts. As a result, both the affine parameters and the sharp reference image are retrieved. They are finally fed into stepwise affine transformations to recover the sharp video frames. The stepwise retrieval inherently bypasses the frame-order ambiguity. Experiments on both public datasets and real captured data validate the state-of-the-art performance of the reported technique.
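The sketch below is a minimal illustration of this idea, not the authors' implementation: each latent frame is modeled as a differentiable affine warp of a shared reference frame, the warped frames are averaged to reproduce the blurred observation, and both the reference frame and the per-frame affine parameters are updated by gradient descent. The frame count, the loss weights, and the plain $\ell_1$ total-variation penalty (standing in for the paper's $\ell_0$-norm TV, which requires a dedicated solver) are assumptions for illustration only.

```python
# Hedged sketch of single-image-to-video recovery via affine motion modeling.
# Assumptions: N latent frames, grayscale image, l1-TV instead of l0-TV.
import torch
import torch.nn.functional as F

N, H, W = 7, 128, 128                        # assumed number of latent frames and image size
blurred = torch.rand(1, 1, H, W)             # placeholder for the observed blurred image

# Unknowns: sharp reference frame and one 2x3 affine matrix per latent frame.
ref = torch.zeros(1, 1, H, W, requires_grad=True)
identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]])
theta = identity.repeat(N, 1, 1).requires_grad_(True)

def synthesize_blur(ref, theta):
    """Warp the reference frame by each affine matrix and average the results."""
    grid = F.affine_grid(theta, [N, 1, H, W], align_corners=False)
    frames = F.grid_sample(ref.expand(N, -1, -1, -1), grid, align_corners=False)
    return frames.mean(dim=0, keepdim=True), frames

def tv(x):
    """Plain l1 total variation (a stand-in for the l0-norm TV in the paper)."""
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

opt = torch.optim.Adam([ref, theta], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    blur_hat, frames = synthesize_blur(ref, theta)
    loss = F.mse_loss(blur_hat, blurred) + 1e-3 * tv(ref)   # data term + regularizer
    loss.backward()
    opt.step()

# After convergence, `frames` holds the sharp video frames obtained by
# stepwise affine transformation of the recovered reference frame.
```

Because every frame is tied to the same reference image through an explicit affine transformation, the recovered sequence carries a fixed temporal ordering by construction, which is the sense in which the stepwise retrieval bypasses the frame-order ambiguity.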