In practical applications, multi-view data that describe objects from diverse perspectives can improve the accuracy of learning algorithms. However, for multi-view data there is limited work that simultaneously learns discriminative node relationships and graph information via graph convolutional networks, which have attracted considerable research attention in recent years. Most existing methods consider only a weighted sum of adjacency matrices, while a joint neural network for both feature and graph fusion remains under-explored. To address these issues, this paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF), consisting of two stages: a feature fusion network and a learnable graph convolutional network. The former learns an underlying feature representation from heterogeneous views, while the latter explores a more discriminative graph fusion via learnable weights and a parametric activation function dubbed the Differentiable Shrinkage Activation (DSA) function. LGCN-FF is validated to be superior to various state-of-the-art methods in multi-view semi-supervised classification.
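The abstract does not give the exact formulas, but the graph-fusion idea it contrasts against (and extends) can be illustrated with a minimal sketch. The assumptions here are mine: view weights are softmax-normalized learnable scalars, and the DSA function is modeled as a soft-thresholding (shrinkage) operator with a learnable threshold `theta` — names `dsa` and `fuse_graphs` are hypothetical, not from the paper.

```python
import numpy as np

def dsa(x, theta):
    # Hypothetical stand-in for the Differentiable Shrinkage Activation:
    # soft-thresholding sign(x) * max(|x| - theta, 0), which shrinks small
    # (likely noisy) fused edge weights to exactly zero.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def fuse_graphs(adjs, weights, theta=0.1):
    # Fuse per-view adjacency matrices by a softmax-weighted sum of the
    # learnable view weights, then sparsify the result with the shrinkage
    # activation. In the paper both `weights` and `theta` would be trained.
    w = np.exp(weights) / np.exp(weights).sum()
    fused = sum(wi * A for wi, A in zip(w, adjs))
    return dsa(fused, theta)

# Two toy 3-node views with different edge sets.
A1 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
A2 = np.array([[0., 0., 1.],
               [0., 0., 1.],
               [1., 1., 0.]])
fused = fuse_graphs([A1, A2], weights=np.array([0.0, 0.0]), theta=0.2)
```

With equal view weights the fused edge between nodes 0 and 1 has weight 0.5 before shrinkage and 0.3 after; edges present in both views survive with larger weight, which is the intuition behind learning a more discriminative fused graph.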