Computationally removing the motion blur introduced by camera shake or object motion in a captured image remains a challenging task in computational photography. Deblurring methods are often limited by the fixed global exposure time of the image capture process. The post-processing algorithm either must deblur a longer exposure that contains relatively little noise or denoise a short exposure that intentionally removes the opportunity for blur at the cost of increased noise. We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring using next-generation focal-plane sensor--processors along with an end-to-end design of these exposures and a machine learning--based motion-deblurring framework. We demonstrate in simulation and a physical prototype that learned spatially varying pixel exposures (L-SVPE) can successfully deblur scenes while recovering high frequency detail. Our work illustrates the promising role that focal-plane sensor--processors can play in the future of computational imaging.
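To make the capture model concrete, the following is a minimal sketch of how a spatially varying pixel exposure (SVPE) image could be simulated from a high-speed video: each pixel integrates the scene over its own exposure duration, so short-exposure pixels freeze motion (with more noise) while long-exposure pixels average it (with blur). The function name `svpe_capture` and the per-pixel integer exposure map are illustrative assumptions, not the paper's implementation; the learned exposure patterns and the deblurring network are not shown here.

```python
import numpy as np


def svpe_capture(frames: np.ndarray, exposure_len: np.ndarray) -> np.ndarray:
    """Simulate a spatially varying pixel exposure capture.

    frames:       (T, H, W) video of the scene at the sensor's frame rate.
    exposure_len: (H, W) integer map; pixel (i, j) integrates the first
                  exposure_len[i, j] frames (a simplifying assumption --
                  real SVPE hardware may offset exposures in time).
    Returns the (H, W) captured image, normalized by exposure duration.
    """
    T, H, W = frames.shape
    # Cumulative sum over time lets us read off any exposure length in O(1).
    csum = np.cumsum(frames, axis=0)
    idx = np.clip(exposure_len.astype(int), 1, T)
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    # Sum of the first idx[i, j] frames, divided by idx to normalize brightness.
    return csum[idx - 1, rows, cols] / idx


# Illustrative usage: a static scene yields the same value at every pixel
# regardless of its exposure length, since averaging a constant is a no-op.
video = np.ones((8, 4, 4))
exposures = np.array([[1, 2, 4, 8]] * 4)
captured = svpe_capture(video, exposures)
```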