This paper presents a new approach for assembling graph neural networks based on framelet transforms, which provide a multi-scale representation for graph-structured data. With the framelet system, we can decompose graph features into low-pass and high-pass frequency components as extracted features for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy that aggregates the graph features into low-pass and high-pass spectra; it accounts for both the feature values and the geometry of the graph data, and it conserves the total information. Graph neural networks equipped with the proposed framelet convolution and pooling achieve state-of-the-art performance on many types of node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds the high-frequency information at different scales. Compared to ReLU, shrinkage in framelet convolution improves the graph neural network model in terms of denoising and signal compression: noise in both node features and graph structure can be significantly reduced by accurately cutting off the high-pass coefficients of the framelet decomposition, and the signal can be compressed to less than half its original size with the prediction performance well preserved.
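The shrinkage activation described above can be illustrated by standard soft-thresholding applied to high-pass framelet coefficients. The following minimal sketch (the threshold value `lam` and the toy coefficient vector are illustrative assumptions, not taken from the paper) shows how coefficients with small magnitude, typically corresponding to noise, are zeroed out while larger ones are shrunk:

```python
import numpy as np

def soft_shrinkage(x, lam):
    """Soft-thresholding: zeroes entries with |x| <= lam and
    shrinks the remaining entries toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy high-pass framelet coefficients; small entries act as noise.
coeffs = np.array([0.05, -0.8, 0.3, -0.02, 1.2])
denoised = soft_shrinkage(coeffs, lam=0.1)
# -> [0.0, -0.7, 0.2, 0.0, 1.1]
```

Because the small coefficients are set exactly to zero, the thresholded representation is sparser than the input, which is the mechanism behind both the denoising and the compression effects mentioned above.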