Knowledge graph embedding aims at modeling entities and relations with low-dimensional vectors. Most previous methods require that all entities be seen during training, which is impractical for real-world knowledge graphs where new entities emerge on a daily basis. Recent efforts on this issue suggest training a neighborhood aggregator in conjunction with the conventional entity and relation embeddings, which may help embed new entities inductively via their existing neighbors. However, their neighborhood aggregators neglect the unordered and unequal natures of an entity's neighbors. To this end, we summarize the desired properties that may lead to effective neighborhood aggregators. We also introduce a novel aggregator, namely, Logic Attention Network (LAN), which addresses these properties by aggregating neighbors with both rules- and network-based attention weights. By comparing with conventional aggregators on two knowledge graph completion tasks, we experimentally validate LAN's superiority in terms of the desired properties.
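The core idea of attention-weighted neighborhood aggregation can be illustrated with a minimal sketch. The toy code below (an assumption for illustration only; it does not reproduce LAN's combination of logic-rule and neural attention) embeds an unseen entity as a softmax-weighted sum of its neighbors' embeddings, which is permutation-invariant (neighbors are unordered) and lets neighbors contribute unequally:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_neighbors(neighbor_embs, attn_scores):
    """Embed a new entity as an attention-weighted sum of its neighbors'
    embedding vectors. The result is invariant to neighbor ordering and
    weights each neighbor by its (softmax-normalized) attention score.

    neighbor_embs: list of d-dimensional vectors (one per neighbor)
    attn_scores:   list of raw attention scores (one per neighbor)
    """
    weights = softmax(attn_scores)
    dim = len(neighbor_embs[0])
    return [sum(w * emb[i] for w, emb in zip(weights, neighbor_embs))
            for i in range(dim)]

# Toy usage: three neighbors in a 2-dimensional embedding space.
embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [2.0, 1.0, 0.5]
new_entity_vec = aggregate_neighbors(embs, scores)
```

In a full model the scores would come from learned rule- and network-based attention rather than being supplied by hand; the aggregation step itself is what makes inductive embedding of unseen entities possible.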