Graph neural networks (GNNs) have received massive attention in the field of machine learning on graphs. Inspired by the success of neural networks, a line of research has been conducted to train GNNs for various tasks, such as node classification, graph classification, and link prediction. In this work, our task of interest is graph classification. Several GNN models have been proposed and have achieved high accuracy on this task. However, it remains unclear whether the usual training methods fully realize the capacity of these models. In this work, we propose a two-stage training framework based on triplet loss. In the first stage, a GNN is trained to map each graph to a Euclidean-space vector so that graphs of the same class are close together while those of different classes are mapped far apart. Once graphs are well separated by label, a classifier is trained to distinguish between the classes. This method is generic in the sense that it is compatible with any GNN model. By adapting five GNN models to our method, we demonstrate consistent improvements in accuracy over each model's original training method, by up to 5.4\% points, across 12 datasets, showing that our framework better utilizes the capacity of each GNN.
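To make the two-stage procedure concrete, the following is a minimal, self-contained PyTorch sketch. The paper does not specify an implementation, so everything here is an illustrative assumption: an MLP over fixed-size vectors stands in for an arbitrary GNN encoder, random tensors stand in for graph datasets, and the triplet mining is deliberately naive (random same-class positives and other-class negatives).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Setup (all placeholders): the method works with any GNN; for a runnable
# sketch we substitute an MLP over 16-d "graph feature" vectors and toy
# binary labels. Hyperparameters are arbitrary.
encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
x = torch.randn(300, 16)            # 300 toy "graphs" as feature vectors
y = torch.randint(0, 2, (300,))     # binary class labels
idx_by_class = {c: (y == c).nonzero().flatten() for c in (0, 1)}

# Stage 1: train the encoder with triplet loss so that same-class graphs
# embed close together and different-class graphs are mapped far apart.
triplet = nn.TripletMarginLoss(margin=1.0)
opt1 = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(200):
    c = torch.randint(0, 2, (1,)).item()                   # pick a class
    pos_pool, neg_pool = idx_by_class[c], idx_by_class[1 - c]
    a = x[pos_pool[torch.randint(len(pos_pool), (64,))]]   # anchors
    p = x[pos_pool[torch.randint(len(pos_pool), (64,))]]   # same-class positives
    n = x[neg_pool[torch.randint(len(neg_pool), (64,))]]   # other-class negatives
    loss = triplet(encoder(a), encoder(p), encoder(n))
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: freeze the learned embeddings and train a classifier on top.
with torch.no_grad():
    z = encoder(x)
clf = nn.Linear(32, 2)
opt2 = torch.optim.Adam(clf.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
for step in range(200):
    loss = ce(clf(z), y)
    opt2.zero_grad(); loss.backward(); opt2.step()

print("train accuracy:", (clf(z).argmax(dim=1) == y).float().mean().item())
```

Because the two stages are decoupled, the classifier in stage 2 can be anything that consumes the fixed embeddings (here a single linear layer), which is what makes the framework compatible with any GNN encoder.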