Learning-based methods have enabled the recovery of a video sequence from a single motion-blurred image or a single coded exposure image. Recovering video from a single motion-blurred image is a severely ill-posed problem, and the recovered video usually contains many artifacts. In addition, the direction of motion is lost, which leads to motion ambiguity. However, the blurred image has the advantage of fully preserving the information in the static parts of the scene. The traditional coded exposure framework is better posed, but it samples only a fraction of the space-time volume, at best 50%. Here, we propose to use the complementary information in the fully-exposed (blurred) image together with the coded exposure image to recover a high-fidelity video without any motion ambiguity. Our framework consists of a shared encoder followed by an attention module that selectively combines the spatial information from the fully-exposed image with the temporal information from the coded image; the fused representation is then super-resolved to recover a non-ambiguous, high-quality video. The input to our algorithm is a fully-exposed and coded image pair, and such an acquisition system already exists in the form of the Coded-two-bucket (C2B) camera. We demonstrate that our proposed deep learning approach using a blurred-coded image pair produces much better results than those from just a blurred image or just a coded image.
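To make the shared-encoder-plus-attention idea concrete, the following is a minimal PyTorch sketch of one plausible realization: a shared encoder applied to both inputs, an attention gate that blends the spatial (blurred) and temporal (coded) features, and a pixel-shuffle decoder that super-resolves the fused features into video frames. All module names, layer widths, the number of output frames, and the upsampling factor are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared feature extractor applied to both the blurred and the coded image."""
    def __init__(self, in_ch=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class AttentionFusion(nn.Module):
    """Predicts per-pixel weights to blend blurred-image (spatial) features
    with coded-image (temporal) features."""
    def __init__(self, feat=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, f_blur, f_coded):
        a = self.gate(torch.cat([f_blur, f_coded], dim=1))
        return a * f_blur + (1.0 - a) * f_coded

class VideoDecoder(nn.Module):
    """Super-resolves the fused features into n_frames video frames."""
    def __init__(self, feat=64, n_frames=9, scale=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat, n_frames * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # spatial upsampling of the sub-sampled measurements
        )
    def forward(self, f):
        return self.net(f)  # (B, n_frames, H*scale, W*scale)

class BlurredCodedToVideo(nn.Module):
    """End-to-end sketch: blurred + coded image pair -> video frames."""
    def __init__(self, n_frames=9):
        super().__init__()
        self.encoder = SharedEncoder()
        self.fusion = AttentionFusion()
        self.decoder = VideoDecoder(n_frames=n_frames)
    def forward(self, blurred, coded):
        f_blur = self.encoder(blurred)   # spatial detail, mainly from static regions
        f_coded = self.encoder(coded)    # temporal cues from the coded exposure
        return self.decoder(self.fusion(f_blur, f_coded))

# Example usage (hypothetical sizes): one 80x80 blurred/coded pair -> 9 frames at 240x240.
model = BlurredCodedToVideo(n_frames=9)
video = model(torch.randn(1, 1, 80, 80), torch.randn(1, 1, 80, 80))
print(video.shape)  # torch.Size([1, 9, 240, 240])
```

The per-pixel sigmoid gate is one simple way to let the network favor blurred-image features in static regions and coded-image features in moving regions; the actual attention formulation in the paper may differ.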