Graph Convolutional Networks (GCNs) have widely demonstrated their powerful ability in graph data representation and learning. Existing graph convolution layers are mainly designed from the perspective of graph signal processing and transforms, and usually suffer from limitations such as over-smoothing, over-squashing, and non-robustness. As is well known, Convolutional Neural Networks (CNNs) have achieved great success in many computer vision and machine learning tasks. One key reason is that CNNs leverage many learnable convolution filters (kernels) to obtain rich feature descriptors and thus have high capacity to encode complex patterns in visual data analysis. CNNs are also flexible in architecture design, as exemplified by MobileNet, ResNet, Xception, etc. It is therefore natural to ask: can we design graph convolutional layers as flexibly as those in CNNs? In this paper, we connect GCNs with CNNs deeply from the general perspective of the depthwise separable convolution operation. Specifically, we show that GCN and GAT each perform specific depthwise separable convolution operations. This novel interpretation enables us to better understand the connections between GCNs (GCN, GAT) and CNNs, and further inspires us to design more Unified GCNs (UGCNs). As two showcases, we implement two UGCNs, i.e., Separable UGCN (S-UGCN) and General UGCN (G-UGCN), for graph data representation and learning. Promising experiments on several graph representation benchmarks demonstrate the effectiveness and advantages of the proposed UGCNs.
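The depthwise-separable reading of a GCN layer can be illustrated with a minimal sketch (this is an illustrative analogy under our own simplified setup, not the paper's S-UGCN/G-UGCN implementation): the standard layer H' = A_hat · H · W factorizes into a per-channel spatial aggregation A_hat · H (analogous to a depthwise convolution) followed by a per-node channel mixing · W (analogous to a 1x1 pointwise convolution).

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_in, d_out = 5, 4, 3              # nodes, input channels, output channels
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                # make the random adjacency symmetric
A_hat = A + np.eye(n)                 # add self-loops
A_hat = A_hat / A_hat.sum(1, keepdims=True)  # row-normalized propagation matrix

H = rng.standard_normal((n, d_in))    # node feature matrix
W = rng.standard_normal((d_in, d_out))  # learnable weight matrix

# "Depthwise" step: neighborhood aggregation applied to each channel independently.
H_agg = A_hat @ H                     # shape (n, d_in)

# "Pointwise" step: 1x1-style channel mixing applied at each node independently.
H_out = H_agg @ W                     # shape (n, d_out)

# The two steps compose to the standard GCN layer (before the nonlinearity).
assert np.allclose(H_out, A_hat @ H @ W)
```

In GAT, the fixed propagation matrix A_hat is replaced by learned attention coefficients, but the same depthwise-then-pointwise factorization applies.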