The non-uniformly distributed nature of the 3D dynamic point cloud (DPC) poses significant challenges to its highly efficient inter-frame compression. This paper proposes a novel 3D sparse convolution-based Deep Dynamic Point Cloud Compression (D-DPCC) network to compensate and compress the DPC geometry with 3D motion estimation and motion compensation in the feature space. In the proposed D-DPCC network, we design a {\it Multi-scale Motion Fusion} (MMF) module to accurately estimate the 3D optical flow between the feature representations of adjacent point cloud frames. Specifically, we utilize a 3D sparse convolution-based encoder to obtain the latent representation for motion estimation in the feature space and introduce the proposed MMF module for a fused 3D motion embedding. Moreover, for motion compensation, we propose a 3D {\it Adaptively Weighted Interpolation} (3DAWI) algorithm with a penalty coefficient to adaptively decrease the impact of distant neighbors. We compress the motion embedding and the residual with a lossy autoencoder-based network. To our knowledge, this is the first work to propose an end-to-end deep dynamic point cloud compression framework. Experimental results show that the proposed D-DPCC framework achieves an average of 76\% BD-Rate (Bjontegaard Delta Rate) gains against state-of-the-art Video-based Point Cloud Compression (V-PCC) v13 in inter mode.
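To make the motion-compensation step concrete, the sketch below gives one plausible, simplified reading of an adaptively weighted interpolation in the spirit of 3DAWI: each target point gathers features from its $k$ nearest neighbors in the motion-compensated source point set, weights them by inverse distance, and normalizes with a penalty coefficient so that points whose neighbors are all distant receive attenuated features. The function name and the {\tt k}, {\tt alpha}, and {\tt eps} parameters, as well as the exact normalization, are illustrative assumptions rather than the paper's definition, and the example operates on dense tensors instead of the sparse-convolution representation used in the network.

\begin{verbatim}
import torch

def adaptively_weighted_interpolation(xyz_tgt, xyz_src, feat_src,
                                       k=3, alpha=2.0, eps=1e-8):
    """Hypothetical 3DAWI-style interpolation sketch.

    xyz_tgt : (N, 3) target coordinates
    xyz_src : (M, 3) warped source coordinates
    feat_src: (M, C) source features
    Returns : (N, C) interpolated target features
    """
    # Pairwise distances between target and source points: (N, M)
    dist = torch.cdist(xyz_tgt, xyz_src)
    # k nearest source neighbors per target point: (N, k)
    knn_dist, knn_idx = dist.topk(k, dim=1, largest=False)
    # Inverse-distance weights
    w = 1.0 / (knn_dist + eps)
    # Penalty-normalized weights (assumed form): dividing by
    # max(sum of weights, alpha) attenuates targets whose
    # neighbors are all far away.
    denom = torch.clamp(w.sum(dim=1, keepdim=True), min=alpha)
    w = w / denom
    # Gather neighbor features and blend: (N, k, C) -> (N, C)
    neigh = feat_src[knn_idx]
    return (w.unsqueeze(-1) * neigh).sum(dim=1)
\end{verbatim}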