The ever-increasing number of 3D applications makes point cloud compression unprecedentedly important and necessary. In this paper, we propose a patch-based compression process using deep learning, focusing on lossy point cloud geometry compression. Unlike existing point cloud compression networks, which apply feature extraction and reconstruction to the entire point cloud, we divide the point cloud into patches and compress each patch independently. In the decoding process, we then assemble the decompressed patches into a complete point cloud. In addition, we train our network with a patch-to-patch criterion, i.e., we use the local reconstruction loss for optimization to approximate global reconstruction optimality. Our method outperforms the state-of-the-art in terms of rate-distortion performance, especially at low bitrates. Moreover, the proposed compression process guarantees that the output has the same number of points as the input. The network model of this method can be easily applied to other point cloud reconstruction problems, such as upsampling.
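To make the patch-based pipeline concrete, below is a minimal sketch (not the authors' code) of the encode/decode flow described above: sample patch centers, group each center's nearest neighbors into a patch, compress every patch independently, and assemble the decoded patches into the full cloud. The helpers `farthest_point_sample`, `encode_patch`, and `decode_patch` are hypothetical placeholders; the latter two stand in for the learned analysis/synthesis networks and here just quantize coordinates.

```python
# Minimal illustration of patch-based point cloud compression (assumed pipeline).
import numpy as np

def farthest_point_sample(points, num_centers):
    """Greedy farthest-point sampling to pick well-spread patch centers."""
    n = points.shape[0]
    centers = [np.random.randint(n)]
    dist = np.full(n, np.inf)
    for _ in range(num_centers - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[centers[-1]], axis=1))
        centers.append(int(dist.argmax()))
    return np.array(centers)

def split_into_patches(points, num_patches, patch_size):
    """Group the k nearest neighbours of each sampled center into a local patch."""
    centers = points[farthest_point_sample(points, num_patches)]
    patches = []
    for c in centers:
        idx = np.argsort(np.linalg.norm(points - c, axis=1))[:patch_size]
        patches.append(points[idx] - c)  # shift to local (patch-centered) coordinates
    return patches, centers

def encode_patch(patch):
    # Placeholder for the learned encoder; a toy coordinate quantizer.
    return np.round(patch * 64).astype(np.int16)

def decode_patch(code):
    # Placeholder for the learned decoder; inverse of the toy quantizer above.
    return code.astype(np.float32) / 64.0

def compress_and_reconstruct(points, num_patches=16, patch_size=512):
    patches, centers = split_into_patches(points, num_patches, patch_size)
    codes = [encode_patch(p) for p in patches]                # independent per-patch codes
    decoded = [decode_patch(c) + ctr for c, ctr in zip(codes, centers)]
    return np.concatenate(decoded, axis=0)                    # assemble the full point cloud

if __name__ == "__main__":
    cloud = np.random.rand(8192, 3).astype(np.float32)
    recon = compress_and_reconstruct(cloud)
    print(cloud.shape, recon.shape)  # output size = num_patches * patch_size
```

In this sketch the decoded cloud contains `num_patches * patch_size` points, so choosing these to match the input size mirrors the paper's guarantee of an equal point count; a patch-to-patch training loss would compare each decoded patch with its corresponding input patch in local coordinates.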