Meta-learning has recently received tremendous attention as a possible approach for mimicking human intelligence, i.e., acquiring new knowledge and skills from little or even no demonstration. Most existing meta-learning methods tackle few-shot learning problems in Euclidean domains such as images and text. However, very few works apply meta-learning to non-Euclidean domains, and the recently proposed graph neural network (GNN) models do not perform effectively on graph few-shot learning problems. To this end, we propose a novel graph meta-learning framework, Meta-GNN, to tackle the few-shot node classification problem in the graph meta-learning setting. Meta-GNN acquires the prior knowledge of classifiers by training on many similar few-shot learning tasks and then classifies nodes from new classes with only a few labeled samples. Moreover, Meta-GNN is a general model that can be straightforwardly incorporated into any existing state-of-the-art GNN. Our experiments on three benchmark datasets demonstrate that the proposed approach not only improves node classification performance by a large margin on few-shot learning problems in the meta-learning paradigm, but also learns a more general and flexible model for task adaptation.
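To make the episodic training idea concrete, the following is a minimal sketch of MAML-style meta-training for few-shot node classification with a GNN, in the spirit of the framework described above. The one-layer GCN, the task tuple format, and the learning rates are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch: MAML-style episodic meta-training of a GNN for few-shot node
# classification. All module names, hyper-parameters, and the task format
# below are illustrative assumptions, not the paper's exact code.
import torch
import torch.nn.functional as F


class SimpleGCN(torch.nn.Module):
    """One-layer GCN: logits = A_hat @ X @ W (A_hat: normalized adjacency)."""

    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(in_dim, n_classes) * 0.01)

    def forward(self, adj, x, weight=None):
        w = self.weight if weight is None else weight
        return adj @ x @ w  # logits for every node


def inner_adapt(model, adj, x, support_idx, support_y, inner_lr=0.1, steps=1):
    """Take a few gradient steps on the task's support nodes,
    returning task-adapted ("fast") weights."""
    fast_w = model.weight
    for _ in range(steps):
        logits = model(adj, x, fast_w)
        loss = F.cross_entropy(logits[support_idx], support_y)
        # create_graph=True keeps the graph so the outer update can
        # backpropagate through the inner adaptation.
        (grad,) = torch.autograd.grad(loss, fast_w, create_graph=True)
        fast_w = fast_w - inner_lr * grad
    return fast_w


def meta_train(model, tasks, meta_lr=0.01, epochs=100):
    """Outer loop: update the shared initialization from query-set losses
    accumulated over a batch of few-shot node classification tasks."""
    opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for _ in range(epochs):
        opt.zero_grad()
        for adj, x, s_idx, s_y, q_idx, q_y in tasks:
            fast_w = inner_adapt(model, adj, x, s_idx, s_y)
            query_logits = model(adj, x, fast_w)
            F.cross_entropy(query_logits[q_idx], q_y).backward()
        opt.step()
```

At test time, the same inner adaptation would be run on the few labeled support nodes of an unseen class set before predicting the query nodes; the single-layer GCN here stands in for any GNN backbone the framework could wrap.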