Text classification is an important and classical problem in natural language processing. A number of studies have applied convolutional neural networks (convolution on a regular grid, e.g., a word sequence) to classification. However, only a limited number of studies have explored the more flexible graph convolutional networks (convolution on a non-grid structure, e.g., an arbitrary graph) for the task. In this work, we propose to use graph convolutional networks for text classification. We build a single text graph for a corpus based on word co-occurrence and document-word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus. Text GCN is initialized with one-hot representations for words and documents; it then jointly learns embeddings for both words and documents, supervised by the known class labels of documents. Our experimental results on multiple benchmark datasets demonstrate that a vanilla Text GCN without any external word embeddings or knowledge outperforms state-of-the-art methods for text classification. Moreover, Text GCN learns predictive word and document embeddings. In addition, experimental results show that the improvement of Text GCN over state-of-the-art comparison methods becomes more prominent as we lower the percentage of training data, suggesting that Text GCN is robust to limited training data in text classification.
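The graph construction and model described above can be sketched on a toy corpus. This is an illustrative sketch only, not the authors' implementation: the toy documents, the two-class setup, and the random (untrained) weights are assumptions, and for brevity raw counts stand in for the TF-IDF (document-word) and PMI (word-word) edge weights used in the paper.

```python
import numpy as np

# Hypothetical toy corpus: 3 documents over a 3-word vocabulary.
docs = [["cat", "sat"], ["dog", "sat"], ["cat", "dog"]]
vocab = sorted({w for d in docs for w in d})
n_docs, n_words = len(docs), len(vocab)
n = n_docs + n_words  # one graph node per document and per word

A = np.eye(n)  # adjacency with self-loops
widx = {w: n_docs + i for i, w in enumerate(vocab)}

# Document-word edges (counts stand in for TF-IDF weights).
for d, doc in enumerate(docs):
    for w in doc:
        A[d, widx[w]] += 1.0
        A[widx[w], d] += 1.0

# Word-word edges (in-document co-occurrence stands in for PMI).
for doc in docs:
    for i in range(len(doc)):
        for j in range(i + 1, len(doc)):
            A[widx[doc[i]], widx[doc[j]]] += 1.0
            A[widx[doc[j]], widx[doc[i]]] += 1.0

# Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Two-layer GCN forward pass: softmax(A_hat · ReLU(A_hat · X · W0) · W1),
# with one-hot node features X, as in the abstract's initialization.
rng = np.random.default_rng(0)
X = np.eye(n)                        # one-hot representation per node
W0 = rng.normal(size=(n, 8)) * 0.1   # untrained weights, for illustration
W1 = rng.normal(size=(8, 2)) * 0.1   # 2 hypothetical classes
H = np.maximum(A_hat @ X @ W0, 0.0)
logits = A_hat @ H @ W1
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs.shape)  # one class distribution per node; document rows are the predictions
```

In the actual model the weights would be trained by cross-entropy on the labeled document nodes only, with word nodes receiving supervision indirectly through the shared graph convolutions.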