Graphs are a highly generic and versatile representation, suitable for almost any data-processing problem. Spectral graph theory has been shown to provide powerful algorithms, backed by solid linear-algebra theory. It can therefore be extremely instrumental to design deep-network building blocks with spectral graph characteristics. For instance, such a network allows the design of optimal graphs for certain tasks or obtaining a canonical orthogonal low-dimensional embedding of the data. Recent attempts to solve this problem were based on minimizing Rayleigh-quotient-type losses. We propose a different approach of directly learning the eigenspace. A severe problem of the direct approach, when applied in batch learning, is the inconsistent mapping of features to eigenspace coordinates across batches. We analyze the degrees of freedom of learning this task with batches and propose a stable alignment mechanism that can handle both batch changes and graph-metric changes. We show that our learnt spectral embedding is better in terms of NMI, ACC, Grassmann distance, orthogonality, and classification accuracy, compared to SOTA. In addition, the learning is more stable.
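To make the batch-inconsistency issue concrete, below is a minimal, self-contained sketch (not the paper's implementation): eigenvectors of a graph Laplacian computed on two different batches of the same data disagree on shared points because each batch fixes its own signs and rotations, and an orthogonal Procrustes-style alignment to a reference batch removes that ambiguity. The Gaussian affinity, the shared anchor set, and the alignment target are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: per-batch spectral embeddings are only defined up to an
# orthogonal transform, so mapping features to eigenspace coordinates is
# inconsistent across batches; aligning to a reference batch stabilizes it.
import numpy as np
from scipy.linalg import eigh, svd

def laplacian_eigvecs(X, k, sigma=1.0):
    """First k nontrivial eigenvectors of the normalized graph Laplacian
    built from a Gaussian affinity on the batch X (n x d)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2.0 * sigma ** 2))
    d = W.sum(1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))  # I - D^{-1/2} W D^{-1/2}
    _, vecs = eigh(L)                                  # eigenvalues in ascending order
    return vecs[:, 1:k + 1]                            # drop the trivial eigenvector

def align(U, U_ref):
    """Orthogonal R minimizing ||U R - U_ref||_F (orthogonal Procrustes)."""
    A, _, Bt = svd(U.T @ U_ref)
    return U @ (A @ Bt)

rng = np.random.default_rng(0)
anchors = rng.normal(size=(64, 2))                     # points present in both batches
batch_a = np.vstack([anchors, rng.normal(size=(64, 2))])
batch_b = np.vstack([anchors, rng.normal(size=(64, 2))])

Ua = laplacian_eigvecs(batch_a, k=4)[:64]              # anchor embedding from batch A
Ub = laplacian_eigvecs(batch_b, k=4)[:64]              # anchor embedding from batch B

# Raw mismatch is typically large due to sign/rotation ambiguity; the aligned
# mismatch is never larger, since the identity is a feasible Procrustes solution.
print("raw mismatch:    ", np.linalg.norm(Ua - Ub))
print("after alignment: ", np.linalg.norm(align(Ub, Ua) - Ua))
```

The same idea carries over to the learned setting: without some alignment, the network receives contradictory regression targets for the same points in different batches, which is the instability the proposed mechanism is designed to remove.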