PageRank, also known as webpage ranking, page level, Google left-side ranking, or Page rank, is a technique [1] that computes rankings from the hyperlinks between web pages. As one of the factors used to rank pages, it is named after Google co-founder Larry Page. Google uses it to reflect the relevance and importance of a web page, and in search engine optimization it is one of the factors frequently used to assess how well a page has been optimized. Google's founders Larry Page and Sergey Brin invented the technique at Stanford University in 1998.
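As a rough illustration of how rankings can be computed from hyperlinks, below is a minimal power-iteration sketch of PageRank in Python. The toy link graph, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions, not details taken from the text above.

    # Minimal PageRank power iteration on a toy link graph (illustrative only).
    links = {                     # page -> pages it links to (hypothetical graph)
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    pages = list(links)
    n = len(pages)
    damping = 0.85                # damping factor commonly used in the literature
    rank = {p: 1.0 / n for p in pages}

    for _ in range(50):           # fixed number of iterations for simplicity
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank

    print(sorted(rank.items(), key=lambda kv: -kv[1]))   # highest-ranked pages first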

Research on keyphrase extraction began back in the 1990s, but advanced abstractive methods based on the encoder-decoder framework and sequence-to-sequence learning have been explored only recently. In fact, over the past three years the research community has proposed more than a dozen abstractive methods that can generate meaningful keyphrases and perform well.

Introduction: In this survey we examine various aspects of keyphrase extraction methods, focusing mainly on the more recent abstractive methods based on neural networks. In particular, we pay attention to the mechanisms that have driven the latter to become more refined. The survey also covers the research patterns and trends in keyphrase generation and text summarization over roughly the past two decades. We first review the most popular KE methods, in particular supervised, graph-based, and other unsupervised approaches. We then describe a currently popular keyphrase dataset, OAGKX, which can serve as a data source for training supervised KG methods or for deriving other byproducts for more specific disciplines.

Extractive keyphrase generation models:

1. Supervised models

The KEA algorithm (Keyphrase Extraction Algorithm) uses features such as TF-IDF and first occurrence, and then applies a Naive Bayes classifier to decide whether a candidate phrase is a keyphrase. The Maui algorithm, which inherits from KEA in several respects, goes a step further: it combines multiple types of features and uses Wikipedia articles as a source of linguistic knowledge. There have also been attempts to improve existing methods by exploring different feature settings; for example, one study that investigated n-grams, noun phrases, PoS tags, and related settings concluded that using words or n-grams matching PoS-tag patterns improves recall compared with using n-grams alone.
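Below is a minimal sketch of a KEA-style supervised pipeline in Python, assuming scikit-learn is available. The toy corpus and labels, the candidate set (unigrams only), the two features (TF-IDF and relative first occurrence), and the use of GaussianNB in place of KEA's discretized Naive Bayes are all simplifications for illustration, not the actual KEA implementation.

    # KEA-style sketch: score candidate unigrams with TF-IDF and first-occurrence
    # features, then train a Naive Bayes classifier to label keyphrase candidates.
    import math
    from collections import Counter
    from sklearn.naive_bayes import GaussianNB

    docs = [                               # toy corpus (hypothetical)
        "graph based keyphrase extraction uses a word graph",
        "neural networks generate abstractive keyphrases",
    ]
    gold = [{"keyphrase", "graph"}, {"keyphrases", "neural"}]   # toy labels

    tokenized = [d.split() for d in docs]
    doc_freq = Counter(t for toks in tokenized for t in set(toks))

    def features(term, tokens):
        tf = tokens.count(term) / len(tokens)
        idf = math.log(len(docs) / doc_freq[term])
        first_pos = tokens.index(term) / len(tokens)   # relative first occurrence
        return [tf * idf, first_pos]

    X, y = [], []
    for tokens, keys in zip(tokenized, gold):
        for term in set(tokens):
            X.append(features(term, tokens))
            y.append(1 if term in keys else 0)

    clf = GaussianNB().fit(X, y)           # KEA itself relies on Naive Bayes
    print(clf.predict(X))                  # predicted keyphrase labels per candidate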

2. Graph-based methods

Among unsupervised extractive KG methods, graph-based methods require the most computational resources. TextRank is a graph-based ranking method derived from the PageRank algorithm. Building on it, SingleRank and ExpandRank were developed; experimental results show that ExpandRank outperforms SingleRank for neighborhoods of any size. The fastest readily available method is RAKE, which achieves higher precision but lower recall than TextRank.
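A minimal TextRank-style sketch is shown below: it builds a word co-occurrence graph and ranks words with PageRank, assuming networkx is available. The window size, the unweighted graph, and the toy sentence are illustrative choices rather than the exact TextRank formulation (which, for instance, restricts candidates by part of speech).

    # TextRank-style sketch: co-occurrence graph over words, ranked with PageRank.
    import networkx as nx

    text = "graph based ranking methods score words by their links to other words"
    tokens = text.split()
    window = 2                                   # co-occurrence window (assumption)

    graph = nx.Graph()
    graph.add_nodes_from(tokens)
    for i, word in enumerate(tokens):
        for other in tokens[i + 1 : i + 1 + window]:
            if other != word:
                graph.add_edge(word, other)

    scores = nx.pagerank(graph, alpha=0.85)      # same damping idea as PageRank
    top = sorted(scores, key=scores.get, reverse=True)[:5]
    print(top)                                   # highest-ranked candidate keywords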

3. Other methods

Besides the two categories above, there are also unsupervised methods that are not graph-based. Most of them use clustering and various similarity measures to find the best keyphrase candidates. The classic TF-IDF, for instance, computes a score for each phrase and ranks the phrases of an entire document. TF-IDF is also one of the most commonly used baselines in KG research.
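As an illustration of that baseline, here is a minimal TF-IDF ranking sketch using scikit-learn's TfidfVectorizer on a toy corpus; taking each document's top-scoring terms as its keyphrases is a simplification of how the baseline is typically applied.

    # TF-IDF baseline sketch: rank each document's terms by their TF-IDF score.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [                                          # toy corpus (hypothetical)
        "keyphrase extraction ranks candidate phrases of a document",
        "neural models generate abstractive keyphrases from text",
    ]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))  # unigrams and bigrams as candidates
    matrix = vectorizer.fit_transform(docs)
    terms = vectorizer.get_feature_names_out()

    for row in matrix.toarray():                      # one row of scores per document
        ranked = sorted(zip(terms, row), key=lambda kv: -kv[1])
        print([term for term, score in ranked[:3] if score > 0])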

Keyphrase Generation: A Multi-Aspect Survey (PDF)

Latest content

This paper shows how to generate efficient tensor algebra code that computes on dynamic sparse tensors, which have sparsity structures that evolve over time. We propose a language for precisely specifying recursive, pointer-based data structures, and we show how this language can express a wide range of dynamic data structures that support efficient modification, such as linked lists, binary search trees, and B-trees. We then describe how, given high-level specifications of such data structures, a compiler can generate code to efficiently iterate over and compute with dynamic sparse tensors that are stored in the aforementioned data structures. Furthermore, we define an abstract interface that captures how nonzeros can be inserted into dynamic data structures, and we show how this abstraction guides a compiler to emit efficient code that stores the results of sparse tensor algebra computations in dynamic data structures. We evaluate our technique and find that it generates efficient dynamic sparse tensor algebra kernels. Code that our technique emits to compute the main kernel of the PageRank algorithm is 1.05$\times$ as fast as Aspen, a state-of-the-art dynamic graph processing framework. Furthermore, our technique outperforms PAM, a parallel ordered (key-value) maps library, by 7.40$\times$ when used to implement element-wise addition of a dynamic sparse matrix to a static sparse matrix.
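As a rough Python illustration of the kind of computation the abstract describes, the sketch below performs element-wise addition of a static CSR-style sparse matrix into a dynamically updatable sparse matrix. The dict-of-row-dicts representation merely stands in for the pointer-based structures (linked lists, BSTs, B-trees) the paper targets, and none of this is the paper's generated code.

    # Sketch: accumulate a static CSR matrix into a dynamic sparse matrix stored
    # as a dict of row dicts (a stand-in for pointer-based structures).
    dynamic = {0: {1: 2.0}, 2: {0: 5.0}}          # row -> {col: value}, supports insertion

    # Static matrix in CSR form: row pointers, column indices, values.
    csr_rowptr = [0, 1, 1, 3]
    csr_cols = [1, 0, 2]
    csr_vals = [3.0, 4.0, 1.0]

    def add_csr_into_dynamic(rowptr, cols, vals, dyn):
        """Element-wise add the CSR matrix into the dynamic structure in place."""
        for row in range(len(rowptr) - 1):
            for k in range(rowptr[row], rowptr[row + 1]):
                row_map = dyn.setdefault(row, {})
                row_map[cols[k]] = row_map.get(cols[k], 0.0) + vals[k]

    add_csr_into_dynamic(csr_rowptr, csr_cols, csr_vals, dynamic)
    print(dynamic)   # {0: {1: 5.0}, 2: {0: 9.0, 2: 1.0}}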
