We present a novel lightweight convolutional neural network for point cloud analysis. In contrast to many current CNNs, which enlarge the receptive field by downsampling the point cloud, our method operates directly on the entire point set without sampling and achieves strong performance efficiently. Our network is built from point-voxel convolution (PVC) layers as its building blocks. Each layer has two parallel branches: a voxel branch and a point branch. In the voxel branch, we aggregate local features at non-empty voxel centers to reduce the geometric information loss caused by voxelization, then apply volumetric convolutions to enhance the encoding of local neighborhood geometry. In the point branch, we use a Multi-Layer Perceptron (MLP) to extract fine-grained point-wise features. The outputs of the two branches are adaptively fused by a feature selection module. Moreover, we supervise the output of every PVC layer so that each learns a different level of semantic information; the final prediction is obtained by averaging all intermediate predictions. We demonstrate empirically that our method achieves comparable results while being fast and memory-efficient. We evaluate our method on popular point cloud datasets for object classification and semantic segmentation.
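The two-branch PVC layer described above can be sketched in plain NumPy. This is only an illustrative approximation under stated assumptions: the grid resolution, the one-layer per-point MLP, and the single sigmoid gate standing in for the feature selection module are all hypothetical choices, not the paper's exact architecture, and the volumetric convolution is replaced here by simple per-voxel averaging for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def voxel_branch(points, feats, resolution=4):
    """Average point features over non-empty voxels, then scatter each
    voxel feature back to the points inside it. This stands in for the
    paper's voxel-center aggregation + volumetric convolution."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # map each point to an integer voxel index on a resolution^3 grid
    idx = ((points - mins) / (maxs - mins + 1e-9) * (resolution - 1)).astype(int)
    keys = idx[:, 0] * resolution**2 + idx[:, 1] * resolution + idx[:, 2]
    out = np.zeros_like(feats)
    for k in np.unique(keys):  # only non-empty voxels are visited
        mask = keys == k
        out[mask] = feats[mask].mean(axis=0)
    return out

def point_branch(feats, W, b):
    """Per-point MLP (one ReLU layer here) for fine-grained features."""
    return np.maximum(feats @ W + b, 0.0)

def feature_selection(voxel_f, point_f, gate_w):
    """Adaptive fusion: a sigmoid gate (hypothetical one-layer version of
    the feature selection module) weighs the two branches per channel."""
    g = 1.0 / (1.0 + np.exp(-np.concatenate([voxel_f, point_f], axis=1) @ gate_w))
    return g * voxel_f + (1.0 - g) * point_f

# toy forward pass: 128 points with 8-dimensional features
pts = rng.standard_normal((128, 3))
f = rng.standard_normal((128, 8))
W, b = rng.standard_normal((8, 8)) * 0.1, np.zeros(8)
gw = rng.standard_normal((16, 8)) * 0.1

fused = feature_selection(voxel_branch(pts, f), point_branch(f, W, b), gw)
print(fused.shape)  # (128, 8): one fused feature vector per input point
```

In a full network, several such layers would be stacked, each with its own auxiliary prediction head for the deep supervision described above.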