The challenge in learning from dynamic graphs for predictive tasks lies in extracting fine-grained temporal motifs from an ever-evolving graph. Moreover, task labels are often scarce, costly to obtain, and highly imbalanced for large dynamic graphs. Recent advances in self-supervised learning on graphs demonstrate great potential, but focus on static graphs. State-of-the-art (SoTA) models for dynamic graphs are not only incompatible with the self-supervised learning (SSL) paradigm but also fail to forecast interactions beyond the very near future. To address these limitations, we present DyG2Vec, an SSL-compatible, efficient model for representation learning on dynamic graphs. DyG2Vec uses a window-based mechanism to generate task-agnostic node embeddings that can be used to forecast future interactions. DyG2Vec significantly outperforms SoTA baselines on benchmark datasets for downstream tasks while only requiring a fraction of the training/inference time. We adapt two SSL evaluation mechanisms to make them applicable to dynamic graphs and thus show that SSL pre-training helps learn more robust temporal node representations, especially for scenarios with few labels.
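To make the window-based idea concrete, below is a minimal, hypothetical sketch (not the DyG2Vec architecture itself): it selects the most recent W interactions before a query time and feeds only that window to a toy encoder, so node embeddings depend solely on the windowed history. The function names `window_edges` and `encode_window`, the window size `W`, and the toy mean-aggregation encoder are illustrative assumptions.

```python
# Illustrative sketch only: not the DyG2Vec model, just the notion of a
# fixed-size interaction window feeding an encoder.
import numpy as np

def window_edges(edges, t_query, W):
    """Return the last W interactions observed strictly before t_query.

    edges: list of (src, dst, timestamp) tuples sorted by timestamp.
    """
    past = [e for e in edges if e[2] < t_query]
    return past[-W:]

def encode_window(window, num_nodes, dim=8, seed=0):
    """Toy encoder: average random edge messages incident to each node.

    A real model would run a temporal message-passing / attention encoder here;
    this stand-in only shows that embeddings are computed from the window alone.
    """
    rng = np.random.default_rng(seed)
    feats = np.zeros((num_nodes, dim))
    counts = np.zeros(num_nodes)
    for src, dst, _ in window:
        msg = rng.standard_normal(dim)
        feats[src] += msg
        feats[dst] += msg
        counts[src] += 1
        counts[dst] += 1
    counts[counts == 0] = 1.0
    return feats / counts[:, None]

# Usage: embeddings for forecasting interactions at t = 10.0 from the last 3 events.
events = [(0, 1, 1.0), (1, 2, 2.5), (0, 2, 4.0), (2, 3, 7.0), (1, 3, 9.0)]
z = encode_window(window_edges(events, t_query=10.0, W=3), num_nodes=4)
print(z.shape)  # (4, 8)
```

Because the embeddings are produced from the interaction window rather than from task labels, they are task-agnostic and can be reused for downstream prediction or SSL pre-training, which is the property the abstract highlights.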