In this paper, we propose a two-stage deep learning framework called VoxelContext-Net for both static and dynamic point cloud compression. Taking advantage of both octree based methods and voxel based schemes, our approach employs the voxel context to compress octree structured data. Specifically, we first extract a local voxel representation that encodes the spatial neighbouring context information for each node in the constructed octree. Then, in the entropy coding stage, we propose a voxel context based deep entropy model to compress the symbols of non-leaf nodes in a lossless way. Furthermore, for dynamic point cloud compression, we additionally introduce local voxel representations from temporally neighbouring point clouds to exploit temporal dependency. More importantly, to alleviate the distortion introduced by the octree construction procedure, we propose a voxel context based 3D coordinate refinement method that produces more accurate reconstructed point clouds at the decoder side and is applicable to both static and dynamic point cloud compression. Comprehensive experiments on static and dynamic point cloud benchmark datasets (e.g., ScanNet and Semantic KITTI) clearly demonstrate the effectiveness of our newly proposed VoxelContext-Net for 3D point cloud geometry compression.
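To make the voxel-context idea above concrete, the following is a minimal sketch (not the authors' code) of how a local binary occupancy cube could be cropped around an octree node and fed to a small 3D CNN that predicts the probability distribution of the node's occupancy symbol for entropy coding; the context size, layer widths, and symbol alphabet of 256 child-occupancy patterns are illustrative assumptions rather than values taken from the paper.

```python
# Hypothetical sketch of a voxel-context entropy model; shapes and layer
# sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def extract_voxel_context(occupancy, center, size=9):
    """Crop a size^3 binary occupancy cube centred at `center` (z, y, x),
    zero-padding at the volume boundaries."""
    pad = size // 2
    padded = F.pad(occupancy, (pad,) * 6)               # pad last 3 dims
    z, y, x = center
    return padded[z:z + size, y:y + size, x:x + size]   # (size, size, size)

class VoxelContextEntropyModel(nn.Module):
    """Toy 3D CNN mapping a local voxel context to a distribution over the
    256 possible child-occupancy symbols of an octree node."""
    def __init__(self, num_symbols=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, num_symbols),
        )

    def forward(self, context):                          # (B, 1, s, s, s)
        return self.net(context).log_softmax(dim=-1)     # log-probabilities

# Usage: the predicted log-probabilities would drive an arithmetic coder.
occupancy = torch.zeros(64, 64, 64)
occupancy[30:34, 30:34, 30:34] = 1.0
ctx = extract_voxel_context(occupancy, (32, 32, 32)).view(1, 1, 9, 9, 9)
log_probs = VoxelContextEntropyModel()(ctx)              # shape (1, 256)
```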