This paper presents a dual-camera system for high spatiotemporal resolution (HSTR) video acquisition, where one camera shoots a video with high spatial resolution and low frame rate (HSR-LFR) and the other captures a video with low spatial resolution and high frame rate (LSR-HFR). Our main goal is to combine the videos from the LSR-HFR and HSR-LFR cameras to create an HSTR video. We propose an end-to-end learning framework, AWnet, consisting mainly of a FlowNet and a FusionNet that learn an adaptive weighting function in the pixel domain to combine the inputs in a frame-recurrent fashion. To improve reconstruction quality for cameras used in practice, we also introduce noise regularization under the same framework. Our method demonstrates noticeable performance gains both in objective PSNR measurements, in simulations with different publicly available video and light-field datasets, and in subjective evaluations with real data captured by dual iPhone 7 and Grasshopper3 cameras. Ablation studies are further conducted to investigate various aspects of our system (such as reference structure, camera parallax, and exposure time) to fully understand its capability for potential applications.
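The adaptive weighting described above can be illustrated with a minimal conceptual sketch (this is not the authors' released code; the names `warped_hsr`, `upsampled_lsr`, and `weight` are illustrative): FusionNet predicts a per-pixel weight map that blends a motion-compensated HSR-LFR frame with an upsampled LSR-HFR frame.

```python
import numpy as np

def adaptive_weighted_fusion(warped_hsr, upsampled_lsr, weight):
    """Blend two spatially aligned frames with a per-pixel weight in [0, 1].

    Conceptual sketch of pixel-domain adaptive weighting: where weight
    is close to 1 the (warped) high-spatial-resolution frame dominates;
    where it is close to 0 the upsampled high-frame-rate frame dominates.
    """
    weight = np.clip(weight, 0.0, 1.0)
    return weight * warped_hsr + (1.0 - weight) * upsampled_lsr

# Toy 2x2 example with constant-valued "frames".
hsr = np.full((2, 2), 10.0)   # stands in for the warped HSR-LFR frame
lsr = np.full((2, 2), 2.0)    # stands in for the upsampled LSR-HFR frame
w = np.array([[1.0, 0.0],
              [0.5, 0.5]])    # per-pixel weights (would be predicted by FusionNet)
fused = adaptive_weighted_fusion(hsr, lsr, w)
print(fused)  # [[10.  2.], [ 6.  6.]]
```

In the full system, the weight map and the motion compensation (via FlowNet) are learned end to end and applied frame-recurrently; this sketch only shows the blending step itself.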