Graphs are present in many real-world applications, such as financial fraud detection, commercial recommendation, and social network analysis. But given the high cost of graph annotation or labeling, we face a severe graph label-scarcity problem, i.e., a graph might have only a few labeled nodes. One instance of this problem is the task of \textit{few-shot node classification}. A predominant approach to this problem resorts to \textit{episodic meta-learning}. In this work, we challenge the status quo by asking a fundamental question: is meta-learning a must for few-shot node classification? We propose a new and simple framework under the standard few-shot node classification setting, as an alternative to meta-learning, to learn an effective graph encoder. The framework consists of supervised graph contrastive learning with novel mechanisms for data augmentation, subgraph encoding, and multi-scale contrast on graphs. Extensive experiments on three benchmark datasets (CoraFull, Reddit, Ogbn) show that the new framework significantly outperforms state-of-the-art meta-learning-based methods.
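To make the core objective concrete, the following is a minimal NumPy sketch of a supervised contrastive loss of the kind the framework builds on (SupCon-style): node embeddings sharing a label are treated as positives and pulled together, while all other pairs act as negatives. The function name, the `temperature` parameter, and the plain-NumPy implementation are illustrative assumptions, not the paper's exact formulation, which additionally involves graph augmentation, subgraph encoding, and multi-scale contrast.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.5):
    """Illustrative supervised contrastive loss over node embeddings.

    embeddings: (n, d) array of node representations from a graph encoder.
    labels:     length-n array of class labels; same-label pairs are positives.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature              # pairwise cosine similarities
    n = len(labels)
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity
    # log-softmax over each anchor's similarities to all other nodes
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos_mask = (labels[None, :] == labels[:, None]) & ~np.eye(n, dtype=bool)
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                   # anchors with at least one positive
    # average negative log-probability over each anchor's positives
    masked_log_prob = np.where(pos_mask, log_prob, 0.0)
    loss = -masked_log_prob.sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

In the supervised setting this uses the scarce labels directly as the contrastive signal, instead of constructing episodes as meta-learning methods do.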