This paper presents a new approach to assembling graph neural networks based on framelet transforms, which provide a multi-scale representation for graph-structured data. We decompose an input graph into low-pass and high-pass frequency coefficients for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy that aggregates graph features into low-pass and high-pass spectra; this accounts for both the feature values and the geometry of the graph data and preserves the total information. Graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance on many node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds high-frequency information at different scales. Compared with ReLU, shrinkage activation improves model performance on denoising and signal compression: noise in both node features and graph structure can be significantly reduced by accurately cutting off the high-pass coefficients of the framelet decomposition, and the signal can be compressed to less than half its original size with well-preserved prediction performance.
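The shrinkage activation described above can be illustrated with standard soft thresholding, the classical operation used to suppress small (typically noise-dominated) high-pass coefficients while retaining large ones. This is a minimal sketch of the general soft-thresholding formula, not the paper's exact implementation; the function name and threshold value are illustrative.

```python
import numpy as np

def soft_shrinkage(coeffs, threshold):
    """Soft thresholding: zero out coefficients with magnitude below
    `threshold`, and shrink the rest toward zero by `threshold`."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

# Small high-pass coefficients (often noise) are suppressed entirely;
# large ones (signal) are shrunk but preserved.
high_pass = np.array([0.05, -0.3, 1.2, -0.8, 0.01])
print(soft_shrinkage(high_pass, 0.1))
```

Applied to the high-pass framelet coefficients at each scale, this kind of thresholding is what enables the denoising and compression behavior the abstract reports: coefficients below the cutoff carry little signal and can be discarded without degrading prediction performance.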