Knowledge Graphs are a valuable resource for capturing semantic knowledge in the form of entities and the relationships between them. However, current deep learning models take distributed representations, or vectors, as input, so the graph must first be compressed into a vectorized representation. We conduct a study to examine whether a deep learning model can compress a graph and then reproduce the same graph with most of its semantics intact. Our experiments show that Transformer models are unable to express the full semantics of the input knowledge graph. We find that this is due to the disparity between the directed, relationship- and type-based information contained in a Knowledge Graph and the fully connected, undirected token-to-token graphical interpretation of the Transformer attention matrix.
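To make the contrast concrete, the following is a minimal illustrative sketch (not code from the study; the toy triples, token list, and random attention scores are assumptions for illustration). It juxtaposes a Knowledge Graph, stored as sparse, directed, typed triples, with a Transformer-style attention matrix, which is dense, untyped, and carries no explicit edge direction semantics after the softmax.

```python
import numpy as np

# Hypothetical toy knowledge graph: sparse, directed, typed edges (head, relation, tail).
kg_triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
]

# Tokens a Transformer would attend over (simplified: one token per entity).
tokens = ["Paris", "France", "Europe"]

# A Transformer attention matrix is a dense, real-valued token-to-token matrix;
# after the row-wise softmax every token attends to every other token, and the
# relation *type* and edge *direction* are not represented explicitly.
scores = np.random.randn(len(tokens), len(tokens))
attention = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# Contrast the two representations.
for head, rel, tail in kg_triples:
    print(f"{head} --{rel}--> {tail}")
print("Attention (dense, untyped, no explicit direction):")
print(np.round(attention, 2))
```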