We introduce PointConvFormer, a novel building block for point cloud based deep neural network architectures. Inspired by generalization theory, PointConvFormer combines ideas from point convolution, where filter weights are based only on relative position, and Transformers, which utilize feature-based attention. In PointConvFormer, feature differences between points in the neighborhood serve as an indicator to re-weight the convolutional weights. Hence, we preserve the invariances of the point convolution operation, while attention is used to select the relevant points in the neighborhood for convolution. To validate the effectiveness of PointConvFormer, we experiment on both semantic segmentation and scene flow estimation tasks on point clouds with multiple datasets, including ScanNet, SemanticKITTI, FlyingThings3D, and KITTI. Our results show that PointConvFormer substantially outperforms classic convolutions, regular transformers, and voxelized sparse convolution approaches with smaller, more computationally efficient networks. Visualizations show that PointConvFormer behaves similarly to convolution on flat surfaces, whereas the neighborhood selection effect is stronger on object boundaries, showing that it combines the best of both worlds.
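The core idea above, convolutional weights computed from relative positions and then modulated by attention computed from feature differences, can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the array sizes, the two-layer MLPs, the random neighbor indices (standing in for kNN), and the softmax normalization are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # two-layer MLP with ReLU; used for both the weight net and the attention net
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

# toy sizes (hypothetical): N points, K neighbors each, C feature channels
N, K, C = 8, 4, 6
pts = rng.normal(size=(N, 3))          # 3D point coordinates
feats = rng.normal(size=(N, C))        # per-point input features
nbr = rng.integers(0, N, size=(N, K))  # neighbor indices (stand-in for a kNN search)

# weight-generating MLP: relative position (3) -> C conv weights per neighbor
W1p, b1p = rng.normal(size=(3, 16)), np.zeros(16)
W2p, b2p = rng.normal(size=(16, C)), np.zeros(C)
# attention MLP: feature difference (C) -> one scalar logit per neighbor
W1a, b1a = rng.normal(size=(C, 16)), np.zeros(16)
W2a, b2a = rng.normal(size=(16, 1)), np.zeros(1)

out = np.zeros((N, C))
for i in range(N):
    rel = pts[nbr[i]] - pts[i]            # relative positions, shape (K, 3)
    w = mlp(rel, W1p, b1p, W2p, b2p)      # position-only conv weights, shape (K, C)
    d = feats[nbr[i]] - feats[i]          # feature differences, shape (K, C)
    logit = mlp(d, W1a, b1a, W2a, b2a)    # attention logits, shape (K, 1)
    a = np.exp(logit - logit.max())
    a = a / a.sum()                       # softmax over the neighborhood
    # attention modulates the convolution weights before aggregation
    out[i] = (a * w * feats[nbr[i]]).sum(axis=0)

print(out.shape)  # (8, 6)
```

Note that the attention term `a` depends only on feature differences, so points whose features resemble the center (e.g. on the same flat surface) receive similar weights, while dissimilar points across a boundary are down-weighted, matching the qualitative behavior described above.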