LiDAR point cloud frame interpolation, which synthesizes intermediate frames between captured frames, has emerged as an important problem for many applications. In particular, to reduce the amount of point cloud data to be transmitted, intermediate frames can be predicted from reference frames so that the data are upsampled to a higher frame rate. However, due to the high-dimensional and sparse characteristics of point clouds, predicting intermediate frames is more difficult for LiDAR point clouds than for videos. In this paper, we propose a novel LiDAR point cloud frame interpolation method that exploits range images (RIs) as an intermediate representation and uses CNNs to conduct the frame interpolation process. Considering that the inherent characteristics of RIs differ from those of color images, we introduce spatially adaptive convolutions to extract range features adaptively, and present an efficient flow estimation method to generate optical flows. The proposed model then warps the input frames and range features based on the optical flows to synthesize the interpolated frame. Extensive experiments on the KITTI dataset clearly demonstrate that our method consistently achieves superior frame interpolation results with better perceptual quality than state-of-the-art video frame interpolation methods. The proposed method can be integrated into any LiDAR point cloud compression system for inter prediction.
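To make the RI intermediate representation concrete, the sketch below shows one common way to project a LiDAR point cloud onto a 2D range image via spherical projection. This is not the authors' exact conversion; the function name, image resolution, and the vertical field-of-view values (chosen to roughly match the KITTI HDL-64E sensor) are illustrative assumptions.

```python
import numpy as np

def pointcloud_to_range_image(points, height=64, width=2048,
                              fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto a 2D range image
    via spherical projection. The vertical field of view here is an
    assumed value roughly matching the KITTI HDL-64E sensor."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)          # range of each point

    yaw = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))          # elevation angle

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * width                # column index
    v = (1.0 - (pitch - fov_down) / fov) * height        # row index

    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    # When several points fall into the same pixel, keep the nearest one:
    # write points in order of decreasing range so closer points overwrite.
    order = np.argsort(r)[::-1]
    ri = np.zeros((height, width), dtype=np.float32)     # 0 marks empty pixels
    ri[v[order], u[order]] = r[order]
    return ri
```

Under this kind of projection, two consecutive RIs can be fed to the CNN, and the predicted intermediate RI can be back-projected to a point cloud using the inverse of the same spherical mapping.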