Great progress has been made in point cloud classification with learning-based methods. However, complex scenes and sensor inaccuracies in real-world applications cause point cloud data to suffer from corruptions such as occlusion, noise, and outliers. In this work, we propose Point-Voxel based Adaptive (PV-Ada) feature abstraction for robust point cloud classification under various corruptions. Specifically, the proposed framework iteratively voxelizes the point cloud and extracts point-voxel features with shared local encoding and a Transformer. Then, adaptive max-pooling is proposed to robustly aggregate the point cloud features for classification. Experiments on the ModelNet-C dataset demonstrate that PV-Ada outperforms state-of-the-art methods. In particular, we ranked $2^{nd}$ in the ModelNet-C classification track of the PointCloud-C Challenge 2022, with an Overall Accuracy (OA) of 0.865. Code will be available at https://github.com/zhulf0804/PV-Ada.
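To give a concrete picture of the aggregation step described above, the following is a minimal PyTorch sketch of what an "adaptive max-pooling" aggregator could look like. The gating MLP, the class name `AdaptiveMaxPool`, and all tensor shapes are illustrative assumptions, not the operator released in the PV-Ada repository.

```python
import torch
import torch.nn as nn


class AdaptiveMaxPool(nn.Module):
    """Hypothetical sketch of an adaptive max-pooling aggregator.

    Assumption: per-point gating scores are predicted from the features and
    used to suppress unreliable points (e.g. outliers) before max-pooling.
    This illustrates the idea only; it is not the authors' exact operator.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        # Small MLP that predicts a reliability score per point (assumed design).
        self.gate = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) per-point features from the point-voxel encoder.
        scores = self.gate(feats)       # (B, N, 1), per-point reliability
        gated = feats * scores          # down-weight suspect points
        pooled, _ = gated.max(dim=1)    # (B, C), permutation-invariant pooling
        return pooled


if __name__ == "__main__":
    pool = AdaptiveMaxPool(feat_dim=256)
    x = torch.randn(2, 1024, 256)       # batch of 2 clouds, 1024 points each
    print(pool(x).shape)                # torch.Size([2, 256])
```

The gating-before-pooling design keeps the aggregation permutation-invariant while letting the network attenuate corrupted points instead of letting an outlier dominate the max.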