We present an end-to-end approach that takes unstructured textual input and generates structured output compliant with a given knowledge graph vocabulary. Inspired by recent successes in neural machine translation, we treat the triples within a given knowledge graph as an independent graph language and propose an encoder-decoder framework with an attention mechanism that leverages knowledge graph embeddings. Our model learns the mapping from natural language text to a triple representation in the form of subject-predicate-object, drawn from the selected knowledge graph vocabulary. Experiments on three different datasets show that our simple yet effective approach achieves competitive F1 measures against the baselines. A demo video is included.
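To make the described pipeline concrete, the sketch below shows one plausible instantiation of an attention-based encoder-decoder that emits subject-predicate-object sequences over a knowledge graph vocabulary. It is not the paper's exact architecture: the choice of PyTorch, GRU cells, dot-product attention, and the idea of initialising the decoder's output embeddings from pre-trained knowledge graph embeddings (the hypothetical `kg_embeddings` tensor) are all assumptions made for illustration.

```python
# Minimal sketch, assuming a GRU encoder-decoder with dot-product attention.
# The decoder vocabulary is the KG vocabulary (entities + relations), so the
# decoded sequence is a subject-predicate-object triple. `kg_embeddings` is a
# hypothetical matrix of pre-trained KG embeddings (e.g., TransE vectors).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextToTripleSeq2Seq(nn.Module):
    def __init__(self, src_vocab, kg_vocab, emb_dim=128, hid_dim=256,
                 kg_embeddings=None):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Decoder vocabulary = KG entities and relations (plus special tokens).
        self.tgt_emb = nn.Embedding(kg_vocab, emb_dim)
        if kg_embeddings is not None:
            # Plug in pre-trained KG embeddings of shape (kg_vocab, emb_dim).
            self.tgt_emb.weight.data.copy_(kg_embeddings)
        self.decoder = nn.GRUCell(emb_dim + hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim * 2, kg_vocab)

    def forward(self, src_ids, tgt_ids):
        enc_out, h = self.encoder(self.src_emb(src_ids))  # (B, S, H), (1, B, H)
        h = h.squeeze(0)
        logits, context = [], torch.zeros_like(h)
        for t in range(tgt_ids.size(1)):  # decode subject, predicate, object
            emb = self.tgt_emb(tgt_ids[:, t])
            h = self.decoder(torch.cat([emb, context], dim=-1), h)
            # Dot-product attention over the encoder states.
            scores = torch.bmm(enc_out, h.unsqueeze(-1)).squeeze(-1)  # (B, S)
            attn = F.softmax(scores, dim=-1)
            context = torch.bmm(attn.unsqueeze(1), enc_out).squeeze(1)
            logits.append(self.out(torch.cat([h, context], dim=-1)))
        return torch.stack(logits, dim=1)  # (B, T, kg_vocab)


# Toy usage: two sentences of length 7 mapped to <subject, predicate, object>.
model = TextToTripleSeq2Seq(src_vocab=1000, kg_vocab=500)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 500, (2, 3))
print(model(src, tgt).shape)  # torch.Size([2, 3, 500])
```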