With the help of the deep learning paradigm, many point cloud networks have been invented for visual analysis. However, these networks still have great potential for improvement, since the information available in point cloud data has not been fully exploited. To improve the effectiveness of existing networks in analyzing point cloud data, we propose a plug-and-play module, PnP-3D, which aims to refine the fundamental point cloud feature representations by incorporating more local context and global bilinear responses from the explicit 3D space and the implicit feature space. To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks, namely classification, semantic segmentation, and object detection, selecting three state-of-the-art networks from each task for evaluation. Serving as a plug-and-play module, PnP-3D can significantly boost the performance of established networks. In addition to achieving state-of-the-art results on four widely used point cloud benchmarks, we present comprehensive ablation studies and visualizations to demonstrate our approach's advantages. The code will be available at https://github.com/ShiQiu0419/pnp-3d.
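To give a concrete intuition for the idea of refining per-point features with local context from the explicit 3D space and a global response from the feature space, here is a minimal NumPy sketch. It is purely illustrative and not the actual PnP-3D design: the k-nearest-neighbor mean, the sigmoid gate, and the residual combination are all simplifying assumptions made for this example.

```python
import numpy as np

def refine_point_features(xyz, feats, k=4):
    """Hypothetical sketch of plug-and-play feature refinement.

    Augments per-point features with (a) local context, taken as the mean
    of the k nearest neighbors' features in explicit 3D space, and
    (b) a global channel-wise response, a sigmoid gate computed over all
    points. This is NOT the PnP-3D module itself, only an illustration.

    xyz:   (N, 3) point coordinates
    feats: (N, C) per-point features
    """
    # Pairwise squared distances in explicit 3D space.
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest neighbors (including the point itself).
    knn = np.argsort(d2, axis=1)[:, :k]
    local = feats[knn].mean(axis=1)               # (N, C) local context
    gate = 1.0 / (1.0 + np.exp(-feats.mean(0)))   # (C,) global gate
    return feats + local * gate                   # residual refinement

# Usage: refine features of a small random point set.
rng = np.random.default_rng(0)
xyz = rng.normal(size=(8, 3))
feats = rng.normal(size=(8, 16))
out = refine_point_features(xyz, feats)
```

The residual form (`feats + ...`) is what makes such a module "plug-and-play": its output has the same shape as its input, so it can be dropped between layers of an existing network without changing the surrounding architecture.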