Graph Convolutional Networks (GCNs) are widely used in a variety of applications and can be seen as an unstructured version of standard Convolutional Neural Networks (CNNs). As with CNNs, the computational cost of GCNs on large input graphs (such as large point clouds or meshes) can be high, inhibiting the use of these networks, especially in environments with limited computational resources. To reduce these costs, quantization can be applied to GCNs. However, aggressive quantization of the feature maps can lead to significant degradation in performance. On the other hand, Haar wavelet transforms are among the most effective and efficient approaches for signal compression. Therefore, instead of applying aggressive quantization to feature maps, we propose to utilize Haar wavelet compression together with light quantization to reduce the computation and bandwidth involved in the network. We demonstrate that this approach surpasses aggressive feature quantization by a significant margin, across a variety of problems ranging from node classification to point cloud classification and part and semantic segmentation.
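To make the contrast concrete, the following is a minimal NumPy sketch of the general idea, not the authors' implementation: a one-level orthonormal Haar transform splits a (smooth) feature signal into coarse averages and fine details; lightly quantizing the coarse part and discarding the details typically preserves the signal better than aggressively quantizing the raw features. The signal, bit widths, and the choice to zero the detail coefficients are all illustrative assumptions.

```python
import numpy as np

def haar_forward(x):
    # One level of the orthonormal Haar transform: pairwise
    # averages (coarse) and differences (detail), each scaled by 1/sqrt(2).
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, d

def haar_inverse(s, d):
    # Exact inverse of haar_forward.
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def quantize(x, bits):
    # Uniform quantization to 2**bits levels over the value range.
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scale = (2 ** bits - 1) / (hi - lo)
    return np.round((x - lo) * scale) / scale + lo

rng = np.random.default_rng(0)
# A smooth-ish stand-in for a feature map (random walk).
feat = rng.standard_normal(1024).cumsum()

# Route 1: aggressive quantization directly on the features (2 bits).
direct = quantize(feat, bits=2)

# Route 2: Haar compression -- lightly quantize the coarse half (8 bits)
# and drop (zero) the detail coefficients, halving the stored data.
s, d = haar_forward(feat)
recon = haar_inverse(quantize(s, bits=8), np.zeros_like(d))

err_direct = np.abs(direct - feat).mean()
err_haar = np.abs(recon - feat).mean()
```

On smooth signals such as this one, `err_haar` comes out well below `err_direct`, even though both routes use a comparable storage budget; this is the intuition behind trading aggressive quantization for wavelet compression plus light quantization.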