With the development of 3D data acquisition facilities, the increasing scale of acquired 3D point clouds poses a challenge to existing data compression techniques. Although promising performance has been achieved in static point cloud compression, leveraging temporal correlations within a point cloud sequence for effective dynamic point cloud compression remains under-explored and challenging. In this paper, we study the attribute (e.g., color) compression of dynamic point clouds and present a learning-based framework, termed 4DAC. To reduce temporal redundancy within the data, we first build 3D motion estimation and motion compensation modules with deep neural networks. Then, the attribute residuals produced by the motion compensation component are encoded into residual coefficients by the region-adaptive hierarchical transform (RAHT). In addition, we propose a deep conditional entropy model that estimates the probability distribution of the transformed coefficients by incorporating temporal context from consecutive point clouds and from the motion estimation/compensation modules. Finally, the data stream is losslessly entropy coded with the predicted distribution. Extensive experiments on several public datasets demonstrate the superior compression performance of the proposed approach.
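To make the transform stage concrete, the sketch below illustrates the per-pair weighted Haar step at the core of the region-adaptive hierarchical transform (RAHT) that encodes the motion-compensated attribute residuals. This is a minimal, simplified illustration rather than the paper's implementation: the voxel coordinates, the x/y/z merge order, and the single-channel residual attributes are assumptions made only for this example.

```python
# Minimal sketch of RAHT applied to per-voxel attribute residuals.
# Sibling voxels are merged pairwise with a weighted Haar step; the AC
# coefficients are kept and the DC values propagate up the octree.
import numpy as np


def raht_pair(a1, w1, a2, w2):
    """Weighted Haar step for two sibling voxels: returns (DC, AC, merged weight)."""
    s = np.sqrt(w1 + w2)
    dc = (np.sqrt(w1) * a1 + np.sqrt(w2) * a2) / s
    ac = (-np.sqrt(w2) * a1 + np.sqrt(w1) * a2) / s
    return dc, ac, w1 + w2


def raht_forward(coords, attrs, depth):
    """Transform per-voxel attributes into one DC value plus AC coefficients.

    coords: (N, 3) integer voxel coordinates at the given octree depth.
    attrs:  (N,) attribute values (e.g., one residual color channel) per voxel.
    """
    nodes = {tuple(c): (float(a), 1.0) for c, a in zip(coords, attrs)}
    ac_coeffs = []
    for _ in range(depth):              # coarsen one octree level at a time
        for axis in range(3):           # merge along x, then y, then z
            merged = {}
            for key, (a, w) in sorted(nodes.items()):
                parent = list(key)
                parent[axis] //= 2
                parent = tuple(parent)
                if parent in merged:    # sibling already present: transform the pair
                    a0, w0 = merged[parent]
                    dc, ac, wsum = raht_pair(a0, w0, a, w)
                    merged[parent] = (dc, wsum)
                    ac_coeffs.append(ac)
                else:                   # lone child: pass through to the next level
                    merged[parent] = (a, w)
            nodes = merged
    (dc, _), = nodes.values()
    return dc, ac_coeffs


if __name__ == "__main__":
    coords = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]])
    residual = np.array([0.2, -0.1, 0.05, 0.3])   # hypothetical motion-compensated residuals
    dc, ac = raht_forward(coords, residual, depth=1)
    print("DC:", dc, "AC coefficients:", ac)
```

In the full pipeline described above, the resulting coefficients would be quantized and entropy coded under the distribution predicted by the conditional entropy model; that stage is omitted here for brevity.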