Spectral graph convolutional networks (GCNs) are deep models that extend neural networks to arbitrary irregular domains. These networks project graph signals onto the eigen-decomposition of their Laplacians, filter them in the spectral domain, and then back-project the filtered signals onto the input graph domain. However, the success of these operations depends heavily on the relevance of the underlying Laplacians, which are mostly handcrafted, and this makes GCNs clearly sub-optimal. In this paper, we introduce a novel spectral GCN that learns not only the usual convolutional parameters but also the Laplacian operators. The latter are designed "end-to-end" as part of a recursive Chebyshev decomposition, with the particularity of conveying both the differential and the non-differential properties of the learned representations, with increasing order and discrimination power, without overparametrizing the trained GCNs. Extensive experiments, conducted on the challenging task of skeleton-based action recognition, show the generalization ability of our proposed Laplacian design and its outperformance w.r.t. different baselines (built upon handcrafted and other learned Laplacians) as well as the related work.
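To make the mechanism concrete, the sketch below shows a minimal Chebyshev-style graph convolution layer in which the Laplacian operator itself is a trainable parameter, optimized end-to-end together with the filter coefficients. This is only an illustrative approximation of the idea described above, not the authors' exact layer; the class name, the symmetrization of the learned operator, the shapes, and the hyper-parameters (e.g. `order`) are assumptions made for the example.

```python
import torch
import torch.nn as nn

class LearnedLaplacianChebConv(nn.Module):
    """Illustrative sketch: Chebyshev graph convolution with a learnable
    Laplacian operator (an assumption, not the paper's exact design)."""

    def __init__(self, num_nodes, in_dim, out_dim, order=3):
        super().__init__()
        # Trainable operator playing the role of the Laplacian;
        # symmetrized in forward() so it behaves like one.
        self.L = nn.Parameter(0.01 * torch.randn(num_nodes, num_nodes))
        # One filter weight matrix per Chebyshev order (the usual conv parameters).
        self.theta = nn.ParameterList(
            nn.Parameter(0.01 * torch.randn(in_dim, out_dim)) for _ in range(order)
        )
        self.order = order

    def forward(self, x):
        # x: (batch, num_nodes, in_dim) graph signal.
        L = 0.5 * (self.L + self.L.transpose(0, 1))          # keep operator symmetric
        Tx = [x, torch.einsum('ij,bjf->bif', L, x)]          # T_0(L)x = x, T_1(L)x = Lx
        for _ in range(2, self.order):
            # Chebyshev recursion: T_k(L)x = 2 L T_{k-1}(L)x - T_{k-2}(L)x
            Tx.append(2 * torch.einsum('ij,bjf->bif', L, Tx[-1]) - Tx[-2])
        # Sum the filtered terms, each with its own learned weights.
        return sum(Tx[k] @ self.theta[k] for k in range(self.order))
```

In this sketch, increasing `order` adds higher-order Chebyshev terms of the learned operator without adding new node-level parameters beyond the per-order weight matrices, which loosely mirrors the paper's goal of gaining discrimination power with order while avoiding overparametrization.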