Recent work has applied knowledge-aware approaches to natural language understanding, question answering, recommendation systems, and other tasks. These approaches rely on large-scale, well-constructed knowledge graphs that serve many downstream applications and equip knowledge-aware models with commonsense reasoning. Such knowledge graphs are built through knowledge acquisition tasks such as relation extraction and knowledge graph completion. This work builds on the growing body of research that applies findings from the field of natural language processing (NLP) to extract knowledge from text and construct knowledge graphs. The focus of this research project is on how transformer-based approaches can be used to extract and contextualise event information, matching it to existing ontologies, in order to build a comprehensive knowledge graph of event representations. Specifically, sub-event extraction is used to create sub-event-aware event representations. These event representations are then further enriched through fine-grained location extraction and contextualised through the alignment of historically relevant quotes.