Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks. Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep, which limits model scalability. In this work, we propose a novel Attentive GNN to tackle these challenges by incorporating a triple-attention mechanism, \ie node self-attention, neighborhood attention, and layer memory attention. We explain why the proposed attentive modules can improve GNN for few-shot learning with theoretical analysis and illustrations. Extensive experiments show that the proposed Attentive GNN outperforms the state-of-the-art GNN-based methods for few-shot learning on the mini-ImageNet and Tiered-ImageNet datasets, under both inductive and transductive settings.
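To make the triple-attention idea concrete, the sketch below shows one plausible way a single GNN layer could combine the three attentions named above. This is a minimal, hypothetical illustration, not the authors' implementation: the module names, the feature-gating form of node self-attention, the pairwise MLP for neighborhood attention, and the softmax over past layer outputs for layer memory attention are all assumptions made here for clarity.

```python
# Hypothetical sketch of a triple-attention GNN layer (NOT the paper's code).
# Shapes and the way the three attentions are combined are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleAttentionGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.self_gate = nn.Linear(dim, dim)          # node self-attention: per-node feature gating
        self.edge_mlp = nn.Sequential(                # neighborhood attention: score each node pair
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.layer_attn = nn.Linear(dim, 1)           # layer memory attention: weight earlier layers
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, memory):
        # x: (N, d) node features; memory: list of (N, d) outputs from earlier layers
        n, d = x.shape
        # 1) node self-attention: re-weight each node's own features
        x = torch.sigmoid(self.self_gate(x)) * x
        # 2) neighborhood attention: softmax attention weights over all node pairs
        pair = torch.cat([x.unsqueeze(1).expand(n, n, d),
                          x.unsqueeze(0).expand(n, n, d)], dim=-1)
        attn = F.softmax(self.edge_mlp(pair).squeeze(-1), dim=-1)   # (N, N)
        agg = attn @ x                                              # aggregate neighbor features
        # 3) layer memory attention: attend over this layer and earlier layer outputs
        stack = torch.stack(memory + [agg], dim=0)                  # (L, N, d)
        w = F.softmax(self.layer_attn(stack).squeeze(-1), dim=0)    # (L, N)
        out = (w.unsqueeze(-1) * stack).sum(dim=0)                  # (N, d)
        return self.update(torch.cat([x, out], dim=-1))

# Usage sketch: 10 nodes with 64-dim features, one previous-layer output in memory.
x = torch.randn(10, 64)
layer = TripleAttentionGNNLayer(64)
h = layer(x, memory=[x])
```

One point the sketch tries to reflect is why layer memory attention can help with over-smoothing: by letting each node re-weight outputs from earlier, less-smoothed layers, the final representation is not forced to rely only on the deepest (most smoothed) aggregation.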