Related Content

BERT, short for Bidirectional Encoder Representations from Transformers, is a method for pre-training language representations: a general-purpose "language understanding" model is trained on a large text corpus (such as Wikipedia) and then applied to downstream NLP tasks such as machine translation and question answering.
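A minimal sketch of that pre-train/fine-tune workflow, assuming the Hugging Face transformers library (my choice for illustration, not something the text prescribes); the model name, toy task and labels are placeholders.

```python
# Minimal sketch: load a pre-trained BERT and take one fine-tuning step on a
# toy (question, passage) relevance pair. Library, task and labels are
# illustrative assumptions, not part of the original text.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. a binary "passage answers question" task
)

batch = tokenizer(
    ["who wrote hamlet ?"],
    ["Hamlet is a tragedy written by William Shakespeare."],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
labels = torch.tensor([1])  # toy label: the passage does answer the question

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)   # cross-entropy loss on the [CLS] head
outputs.loss.backward()
optimizer.step()
```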

[Overview] AAAI 2020 was held in New York, USA, from February 7 to February 12, 2020. Michael Galkin has written up the knowledge-graph research trends at AAAI 2020, covering KG-augmented language models, entity matching across heterogeneous KGs, KG completion and link prediction, and KG-based conversational AI and question answering, with the relevant papers included; well worth a read!

Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, Graham Neubig: Latent Relation Language Models. AAAI 2020

  • Latent Relation Language Models: this paper proposes Latent Relation Language Models (LRLMs), a class of language models that parameterize the joint distribution over the words in a document and the entities it mentions via knowledge graph relations. The model has several appealing properties: it not only improves language-modeling performance, but can also compute posterior probabilities over the entity spans of a given text through its relation annotations. Experiments show empirical improvements over word-based baseline language models and over previous approaches that incorporate knowledge graph information, and qualitative analysis further demonstrates that the model learns to predict appropriate relations in context. A toy sketch of the underlying latent-mixture idea follows below.
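The sketch below is my own illustration of the core idea (not the authors' code): at each step the model either generates an ordinary word or "copies" an entity surface form reachable from the topic entity through a KG relation, and the latent choice between the two paths is marginalised out. All probabilities, relations and aliases are made up.

```python
# Toy latent-mixture language model over words and KG-linked entity aliases.
word_lm = {"the": 0.10, "director": 0.05, "Nolan": 0.001}   # p(word | context)
kg = {  # relation -> [(alias of object entity, p(alias | relation)), ...]
    "directed_by": [("Christopher Nolan", 0.7), ("Nolan", 0.3)],
    "release_year": [("2010", 1.0)],
}
p_relation = {"directed_by": 0.8, "release_year": 0.2}       # p(relation | topic entity)
p_switch_kg = 0.3   # probability of taking the KG path at this step

def token_probability(token: str) -> float:
    """Marginal probability of `token`, summing over the latent word/KG choices."""
    p_word = (1 - p_switch_kg) * word_lm.get(token, 0.0)
    p_kg = p_switch_kg * sum(
        p_relation[rel] * p_alias
        for rel, aliases in kg.items()
        for alias, p_alias in aliases
        if alias == token
    )
    return p_word + p_kg

print(token_probability("Nolan"))   # both the word path and the KG path contribute
```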

Knowledge graphs are important resources for many artificial intelligence tasks but often suffer from incompleteness. In this work, we propose to use pre-trained language models for knowledge graph completion. We treat triples in knowledge graphs as textual sequences and propose a novel framework named Knowledge Graph Bidirectional Encoder Representations from Transformer (KG-BERT) to model these triples. Our method takes the entity and relation descriptions of a triple as input and computes the scoring function of the triple with the KG-BERT language model. Experimental results on multiple benchmark knowledge graphs show that our method can achieve state-of-the-art performance in triple classification, link prediction and relation prediction tasks.
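A rough sketch of the triple-scoring setup described in this abstract, assuming the Hugging Face transformers library; the exact input packing of the released KG-BERT code may differ, and the descriptions below are toy examples.

```python
# Score a (head, relation, tail) triple by packing its textual descriptions
# into one BERT sequence and reading a binary plausible/implausible head.
# This is a simplified illustration, not the authors' released implementation.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def score_triple(head_desc: str, relation: str, tail_desc: str) -> float:
    """Return the model's plausibility score for the triple."""
    text = f"{head_desc} [SEP] {relation} [SEP] {tail_desc}"
    enc = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # p(triple is plausible)

# With untrained classifier weights this number is meaningless; in KG-BERT the
# head is fine-tuned on positive triples and corrupted negatives.
print(score_triple("Barack Obama is a politician", "born in", "Honolulu is a city in Hawaii"))
```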

We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. The primary contribution of our paper is improved baselines that can provide the foundation for future work.
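The distillation step mentioned above can be sketched as below; this is a generic soft-target distillation recipe (BERT teacher logits guiding a small bidirectional LSTM student), not the authors' exact training code, and the dimensions, temperature and single-label loss are illustrative simplifications.

```python
# Distill a (precomputed) BERT teacher into a small BiLSTM document classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMStudent(nn.Module):
    def __init__(self, vocab_size=30522, emb=128, hidden=256, num_labels=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids):
        h, _ = self.lstm(self.emb(input_ids))
        return self.out(h.max(dim=1).values)   # max-pool over time, then classify

student = LSTMStudent()
T = 2.0  # softmax temperature for the soft targets

input_ids = torch.randint(0, 30522, (8, 256))   # toy batch of token ids
teacher_logits = torch.randn(8, 4)              # placeholder: fine-tuned BERT-large outputs
gold = torch.randint(0, 4, (8,))                # hard labels

student_logits = student(input_ids)
kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
              F.softmax(teacher_logits / T, dim=-1),
              reduction="batchmean") * T * T
ce = F.cross_entropy(student_logits, gold)
loss = 0.5 * kd + 0.5 * ce                      # blend soft and hard targets
loss.backward()
```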

This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions. Code, preprocessed data and pretrained model are available at https://github.com/google-research/language/tree/master/language/question_answering/bert_joint.
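As a hedged sketch of the kind of joint prediction head such a BERT-based NQ baseline uses (per-token start/end span logits plus an answer-type classifier on [CLS]); the actual model lives in the linked google-research repository, and the number of answer types here is an assumption based on that description.

```python
# Joint span + answer-type head on top of BERT; a simplified illustration only.
import torch.nn as nn
from transformers import BertModel

class BertJointQA(nn.Module):
    def __init__(self, name="bert-base-uncased", num_answer_types=5):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        hidden = self.bert.config.hidden_size
        self.span = nn.Linear(hidden, 2)                        # start/end logit per token
        self.answer_type = nn.Linear(hidden, num_answer_types)  # e.g. null/yes/no/short/long

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        start_logits, end_logits = self.span(out.last_hidden_state).unbind(dim=-1)
        type_logits = self.answer_type(out.pooler_output)        # [CLS]-based classification
        return start_logits, end_logits, type_logits
```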

With the tremendous growth in the number of scientific papers being published, searching for references while writing a scientific paper is a time-consuming process, so a technique that can insert a reference citation at the appropriate place in a sentence would be beneficial. From this perspective, context-aware citation recommendation has been researched for around two decades. Many researchers have used the text data called the context sentence, which surrounds the citation tag, together with the metadata of the target paper to find the appropriate cited work. However, the lack of well-organized benchmark datasets and of models that attain high performance has made this research difficult. In this paper, we propose a deep-learning-based model and a well-organized dataset for context-aware paper citation recommendation. Our model comprises a document encoder and a context encoder, which use a Graph Convolutional Network (GCN) layer and Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained on textual data. By modifying the related PeerRead dataset, we propose a new dataset called FullTextPeerRead that contains context sentences linked to cited references along with paper metadata. To the best of our knowledge, this is the first well-organized dataset for context-aware paper recommendation. The results indicate that the proposed model with the proposed datasets can attain state-of-the-art performance, achieving a more than 28% improvement in mean average precision (MAP) and recall@k.
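An illustrative sketch of the two-encoder design described above: a BERT encoder for the citation-context sentence and a graph encoder for candidate papers, scored by dot product. The hand-written single GCN layer, toy citation graph and scoring are simplifications of mine, not the authors' implementation.

```python
# Rank candidate papers for a citation context with a BERT context encoder
# and a one-layer GCN paper encoder (toy data throughout).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SimpleGCNLayer(nn.Module):
    """One GCN step: H' = ReLU(A_hat @ H @ W), A_hat = D^-1/2 (A + I) D^-1/2."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, feats, adj):
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.lin(a_norm @ feats))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
gcn = SimpleGCNLayer(in_dim=64, out_dim=bert.config.hidden_size)

# Toy citation graph: 4 candidate papers with random initial features.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
paper_vecs = gcn(torch.randn(4, 64), adj)                # (4, hidden)

context = "Pre-trained language models such as [CITATION] have improved many NLP tasks."
enc = tokenizer(context, return_tensors="pt", truncation=True, max_length=64)
context_vec = bert(**enc).pooler_output                  # (1, hidden)

scores = context_vec @ paper_vecs.T                      # higher = better candidate
print(scores.argsort(descending=True))
```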

Related VIP content
• Knowledge Graphs @ AAAI 2020 (an overview of the AAAI 2020 knowledge graph papers) · Feb 13, 2020
• Four must-read 2019 papers on BERT progress · Jan 2, 2020
• ALBERT: a lite BERT for self-supervised learning of language representations (Google paper) · Nov 4, 2019
• A paper collection on knowledge graph ontology construction · Oct 9, 2019

Related news
• 17 must-read Knowledge Graphs papers @ AAAI 2020
• [Resources] An up-to-date list of BERT-related papers · 专知 · Oct 2, 2019
• Eight papers on the progress of, and reflections on, BERT-related models (from MSRA)
• Guided paper reading: eight papers on BERT-related models · 新智元 · Sep 9, 2019
• ACL 2019 | Exploring the language representations of multilingual BERT · AI科技评论 · Sep 6, 2019
• GitHub project recommendation | awesome-bert: a large list of BERT-related resources · AI研习社 · Feb 26, 2019
• Paper digest | Recommended papers on knowledge representation learning · 开放知识图谱 · Feb 12, 2018
• Recommended papers on knowledge representation learning | weekly paper list

Related papers
• Markus Eberts, Adrian Ulges · Sep 17, 2019
• K-BERT: Enabling Language Representation with Knowledge Graph. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang · Sep 17, 2019
• Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer · Sep 12, 2019
• Liang Yao, Chengsheng Mao, Yuan Luo · Sep 11, 2019
• Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy Lin · Aug 22, 2019
• X-BERT: eXtreme Multi-label Text Classification with BERT. Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, Inderjit Dhillon · Jul 4, 2019
• Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi · Apr 21, 2019
• Yang Liu · Mar 25, 2019
• Chris Alberti, Kenton Lee, Michael Collins · Mar 21, 2019
• A Context-Aware Citation Recommendation Model with BERT and Graph Convolutional Networks. Chanwoo Jeong, Sion Jang, Hyuna Shin, Eunjeong Park, Sungchul Choi · Mar 15, 2019