The recent development of dynamic point clouds has introduced the possibility of mimicking natural reality, with great potential to improve quality of life. However, successful broadcasting requires dynamic point clouds to be compressed more heavily than traditional video because of their huge data volume. Recently, MPEG finalized a Video-based Point Cloud Compression standard known as V-PCC. However, V-PCC requires substantial computational time due to expensive normal estimation and segmentation, sacrifices some points to limit the number of 2D patches, and cannot occupy all of the space in the 2D frame. The proposed method addresses these limitations with a novel cross-sectional approach. This approach reduces expensive normal estimation and segmentation, retains more points, and utilizes more of the 2D frame during frame generation than V-PCC. Experimental results on standard video sequences show that the proposed technique achieves better compression of both geometry and texture data than the V-PCC standard.