Classifying semantic relations between entity pairs in sentences is an important task in Natural Language Processing (NLP). Most previous models for relation classification rely on high-level lexical and syntactic features obtained from NLP tools such as WordNet, dependency parsers, part-of-speech (POS) taggers, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize entity information, which may be the most crucial feature for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model that incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only uses entities and their latent types as features effectively but is also more interpretable, as demonstrated by visualizing the attention weights and the LET results. Experimental results on SemEval-2010 Task 8, one of the most popular relation classification tasks, show that our model outperforms existing state-of-the-art models without using any high-level features.
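The abstract names two components: latent entity typing (soft assignment of each entity over a small set of trainable type embeddings) and entity-aware attention (sentence-level attention conditioned on the entity representations). The sketch below is a minimal NumPy illustration of that general idea, not the authors' exact formulation; all shapes, the query construction, and the single projection matrix `W` are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, T, K = 8, 5, 3        # hidden size, sentence length, number of latent types

H = rng.normal(size=(T, d))   # hidden states from a (hypothetical) BiLSTM encoder
e1, e2 = H[1], H[3]           # hidden states at the two entity positions

# Latent entity typing: each entity attends over K trainable type embeddings,
# yielding a soft type distribution and a type-aware entity feature.
C = rng.normal(size=(K, d))   # latent type embeddings (learned in practice)
def latent_type(e):
    a = softmax(e @ C.T)      # soft assignment over the K latent types
    return a @ C, a           # type feature, type distribution

t1, a1 = latent_type(e1)
t2, a2 = latent_type(e2)

# Entity-aware attention: score every word against a query built from the
# entity hidden states and their latent-type features.
q = np.concatenate([e1, t1, e2, t2])
W = rng.normal(size=(d, q.shape[0])) * 0.1   # projection (trained in practice)
alpha = softmax(H @ (W @ q))                 # attention weights over the sentence
z = alpha @ H                                # sentence feature fed to the classifier
```

Here `alpha` and the type distributions `a1`, `a2` are the quantities one would visualize for interpretability, as the abstract suggests.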