The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce equal or better graph representations than curved Riemannian manifolds of higher dimensions.
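To make the geometric intuition concrete, the sketch below computes the squared interval between two points in a 2-D cylindrical Minkowski spacetime (one time coordinate, one angular coordinate wrapped onto a circle). This is a minimal illustration of the kind of pseudo-Riemannian background described above, not the paper's actual embedding model; the function name and the light-cone interpretation of directed edges are illustrative assumptions.

```python
import numpy as np

def cyl_minkowski_sq_interval(p, q):
    """Squared interval between p = (t, theta) and q = (t', theta') in a
    2-D cylindrical Minkowski spacetime with metric ds^2 = -dt^2 + dtheta^2,
    where theta lives on the circle S^1 (period 2*pi)."""
    dt = q[0] - p[0]
    # Wrap the angular difference to (-pi, pi] so the topology is cylindrical.
    dtheta = np.angle(np.exp(1j * (q[1] - p[1])))
    return -dt**2 + dtheta**2

# A negative squared interval means the two points are timelike-separated:
# one lies inside the other's light cone. With time as the preferred
# direction, this separation can serve as a proxy for a directed edge.
p = np.array([0.0, 0.0])
q = np.array([2.0, 0.5])  # far ahead in time, nearby on the circle
print(cyl_minkowski_sq_interval(p, q))  # negative: timelike-separated
```

Note the role of the wrapped angular coordinate: two points with angles 0 and just under 2π are close on the cylinder even though their raw coordinate difference is large, which is exactly the non-trivial global topology the abstract refers to.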