Several approaches have been developed that generate embeddings for Description Logic ontologies and use these embeddings in machine learning. One approach to generating ontology embeddings is to first embed the ontology in a graph structure, i.e., to introduce a set of nodes and edges for named entities and logical axioms, and then apply a graph embedding method to embed the graph in $\mathbb{R}^n$. Methods that embed ontologies in graphs (graph projections) have different formal properties related to the types of axioms they can utilize, whether or not the projections are invertible, and whether they can be applied to asserted axioms or their deductive closure. We analyze, qualitatively and quantitatively, several graph projection methods that have been used to embed ontologies, and we demonstrate the effect of the properties of graph projections on the performance of predicting axioms from ontology embeddings. We find that there are substantial differences between projection methods, and that both the projection of axioms into nodes and edges as well as ontological choices in representing knowledge impact the success of using ontology embeddings to predict axioms.
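To make the two-step pipeline concrete, the following is a minimal sketch (not the implementation evaluated in the paper): a toy "projection" turns a few simple axioms into labelled edges, and a TransE-style translational model then embeds the resulting graph in $\mathbb{R}^n$ and scores a candidate axiom. The axioms, entity names, and training loop are illustrative assumptions only.

```python
# Minimal sketch, assuming toy axioms and a TransE-style objective (h + r ≈ t).
import numpy as np

# Step 1: project simple axioms into labelled edges.
# Handled shapes: C SubClassOf D, and C SubClassOf (r some D) as an r-edge.
axioms = [
    ("Aspirin", "subclassof", "Drug"),
    ("Aspirin", "treats", "Headache"),   # from: Aspirin SubClassOf treats some Headache
    ("Headache", "subclassof", "Symptom"),
]
nodes = sorted({h for h, _, _ in axioms} | {t for _, _, t in axioms})
relations = sorted({r for _, r, _ in axioms})
n_idx = {n: i for i, n in enumerate(nodes)}
r_idx = {r: i for i, r in enumerate(relations)}

# Step 2: embed the projected graph in R^n with a translational model.
rng = np.random.default_rng(0)
dim = 16
E = rng.normal(scale=0.1, size=(len(nodes), dim))      # node embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

lr, margin = 0.01, 1.0
for epoch in range(200):
    for h, r, t in axioms:
        hi, ri, ti = n_idx[h], r_idx[r], n_idx[t]
        tneg = rng.integers(len(nodes))                 # corrupt the tail entity
        pos = E[hi] + R[ri] - E[ti]
        neg = E[hi] + R[ri] - E[tneg]
        if margin + np.linalg.norm(pos) - np.linalg.norm(neg) > 0:
            g_pos = pos / (np.linalg.norm(pos) + 1e-9)  # gradient directions
            g_neg = neg / (np.linalg.norm(neg) + 1e-9)
            E[hi] -= lr * (g_pos - g_neg)
            R[ri] -= lr * (g_pos - g_neg)
            E[ti] += lr * g_pos
            E[tneg] -= lr * g_neg

# Predict a candidate axiom by scoring its projected edge (higher = more plausible).
score = -np.linalg.norm(E[n_idx["Aspirin"]] + R[r_idx["subclassof"]] - E[n_idx["Symptom"]])
print("plausibility of 'Aspirin SubClassOf Symptom':", score)
```

In this sketch, the choice of which axiom patterns become edges is exactly the kind of projection property the abstract refers to: axioms that the projection cannot express never reach the embedding model, which is one way projection properties can affect axiom prediction.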