Word feature vectors have been shown to improve many NLP tasks. Recent advances in the unsupervised learning of these feature vectors have made it possible to train them on much more data, which in turn yields higher-quality learned features. Because such a model learns the joint probability of latent word features, it can be trained without any prior knowledge of the target task we want to solve. We aim to evaluate the universal applicability of these feature vectors, which has already been shown to hold for many standard NLP tasks such as part-of-speech tagging and syntactic parsing. In our case, we want to capture the topical focus of text documents and design an efficient representation suitable for discriminating between different topics. This discriminativeness can be evaluated adequately on the text categorisation task. We propose a novel method for extracting discriminative keywords from documents. We utilise word feature vectors both to better capture the relations between words and to identify latent topics that are not mentioned directly in the text but can be inferred logically. We also present a simple way to compute document feature vectors from the extracted discriminative words. We evaluate our method on four of the most popular datasets for text categorisation. We show how different discriminative metrics influence the overall results. We demonstrate the effectiveness of our approach by achieving state-of-the-art results on the text categorisation task using only a small number of extracted keywords. We show that word feature vectors can substantially improve the topical inference of a document's meaning. We conclude that distributed representations of words can be used to build higher levels of abstraction, as we demonstrate by building feature vectors of documents.
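To make the pipeline sketched above concrete, the following is a minimal illustration of the two steps the abstract mentions: selecting discriminative keywords and averaging their word feature vectors into a document feature vector. It is only a sketch under stated assumptions, not the paper's actual method: a TF-IDF-style score stands in for the discriminative metrics discussed later, the toy three-dimensional vectors stand in for pre-trained embeddings, and all function names and data are hypothetical.

```python
import numpy as np
from collections import Counter
from math import log

# Hypothetical stand-in for pre-trained word feature vectors
# (in practice these would come from an unsupervised model such as word2vec).
word_vectors = {
    "football": np.array([0.9, 0.1, 0.0]),
    "league":   np.array([0.8, 0.2, 0.1]),
    "match":    np.array([0.7, 0.3, 0.2]),
    "election": np.array([0.0, 0.9, 0.3]),
}

def discriminative_keywords(doc_tokens, corpus_doc_freq, n_docs, top_k=3):
    """Score words with a TF-IDF-style metric (a stand-in for the paper's
    discriminative metrics) and keep the top_k highest-scoring keywords."""
    tf = Counter(doc_tokens)
    scores = {
        w: tf[w] * log(n_docs / (1 + corpus_doc_freq.get(w, 0)))
        for w in tf if w in word_vectors
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def document_vector(keywords):
    """Average the feature vectors of the extracted keywords to obtain
    a single document feature vector."""
    vecs = [word_vectors[w] for w in keywords if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

# Usage on a toy document: extract keywords, then build its document vector.
doc = ["football", "league", "match", "match", "election"]
doc_freq = {"football": 10, "league": 12, "match": 30, "election": 50}
keywords = discriminative_keywords(doc, doc_freq, n_docs=100)
print(keywords, document_vector(keywords))
```

The resulting document vector can then be fed to any standard classifier for the text categorisation evaluation described in the abstract.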