Stereo video retargeting aims to resize a stereo video to a desired aspect ratio. The quality of retargeted videos depends heavily on the stereo video's spatial, temporal, and disparity coherence, all of which can be degraded by the retargeting process. Due to the lack of a publicly accessible annotated dataset, there is little research on deep learning-based stereo video retargeting. This paper proposes an unsupervised deep learning-based stereo video retargeting network. Our model first detects the salient objects, then shifts and warps all objects so as to minimize the distortion of the salient parts of the stereo frames. We use 1D convolution to shift the salient objects and design a stereo video Transformer to assist the retargeting process. To train the network, we use the parallax attention mechanism to fuse the left and right views and feed the retargeted frames to a reconstruction module that reverses them back to the input frames; the network is therefore trained in an unsupervised manner. Extensive qualitative and quantitative experiments and ablation studies on the KITTI stereo 2012 and 2015 datasets demonstrate the effectiveness of the proposed method over existing state-of-the-art methods. The code is available at https://github.com/z65451/SVR/.
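The core idea of saliency-preserving retargeting can be illustrated with a toy example. The sketch below is *not* the paper's 1D-convolution network; it is a simplified, hypothetical axis-aligned width reduction in plain Python, in which each source column receives a width budget proportional to its saliency, so that low-saliency columns absorb most of the shrinkage while salient columns survive at near-original resolution. The function name `retarget_width` and the nearest-column resampling rule are illustrative assumptions.

```python
def retarget_width(frame_row, saliency, target_w):
    """Shrink a 1-D row of pixels to target_w columns.

    frame_row : list of pixel values (one image row)
    saliency  : per-column saliency weights in [0, 1]
    target_w  : desired output width (< len(frame_row))

    Toy illustration of saliency-weighted retargeting, NOT the
    paper's learned 1D-convolution shift module.
    """
    w = len(frame_row)
    # Small floor so zero-saliency columns are squeezed but never
    # divide-by-zero; salient columns get most of the target width.
    eps = 1e-3
    weights = [s + eps for s in saliency]
    total = sum(weights)

    # Cumulative position of each source column in target coordinates:
    # salient columns advance the cursor ~1 target column per source
    # column, non-salient columns advance it almost not at all.
    pos, acc = [], 0.0
    for wgt in weights:
        acc += wgt / total * target_w
        pos.append(acc)

    # For each target column, pick the nearest source column whose
    # cumulative position covers it (a crude non-uniform resampling).
    out, j = [], 0
    for t in range(target_w):
        while j < w - 1 and pos[j] < t + 0.5:
            j += 1
        out.append(frame_row[j])
    return out
```

For example, shrinking a 10-column row whose middle four columns are salient down to 5 columns yields an output drawn almost entirely from that salient region. The paper's network replaces this hand-crafted budget with learned 1D convolutions and enforces consistency between the left and right views via parallax attention.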