Text classification is an important and classical problem in natural language processing. A number of studies have applied convolutional neural networks (convolution on a regular grid, e.g., a sequence) to classification. However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on a non-grid structure, e.g., an arbitrary graph) for the task. In this work, we propose to use graph convolutional networks for text classification. We build a single text graph for a corpus based on word co-occurrence and document-word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text GCN is initialized with one-hot representations for words and documents; it then jointly learns the embeddings for both words and documents, as supervised by the known class labels for documents. Our experimental results on multiple benchmark datasets demonstrate that a vanilla Text GCN without any external word embeddings or knowledge outperforms state-of-the-art methods for text classification. Text GCN also learns predictive word and document embeddings. In addition, experimental results show that the improvement of Text GCN over state-of-the-art comparison methods becomes more prominent as we lower the percentage of training data, suggesting the robustness of Text GCN to limited training data in text classification.
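The pipeline the abstract describes can be sketched end to end: build one heterogeneous graph whose nodes are documents and words, weight document-word edges by TF-IDF and word-word edges by positive PMI over sliding windows, then run a two-layer GCN with one-hot (identity) node features. The following is a minimal numpy sketch under stated assumptions: the toy corpus, window size, hidden dimension, class count, and random (untrained) weights are all illustrative, not the paper's actual configuration.

```python
import math
from collections import Counter
import numpy as np

# Toy corpus; in the real setup, document nodes carry the class labels.
docs = ["graph networks classify text",
        "text classification with graph",
        "word embeddings help classification"]

vocab = sorted({w for d in docs for w in d.split()})
w2i = {w: i for i, w in enumerate(vocab)}
n_docs, n_words = len(docs), len(vocab)
n = n_docs + n_words           # document nodes first, then word nodes

A = np.eye(n)                  # adjacency with self-loops

# Document-word edges weighted by TF-IDF.
df = Counter(w for d in docs for w in set(d.split()))
for di, d in enumerate(docs):
    toks = d.split()
    for w, c in Counter(toks).items():
        tfidf = (c / len(toks)) * math.log(n_docs / df[w])
        A[di, n_docs + w2i[w]] = A[n_docs + w2i[w], di] = tfidf

# Word-word edges weighted by positive PMI over sliding windows
# (window size 2 is an assumption for this toy example).
window = 2
windows = []
for d in docs:
    toks = d.split()
    for i in range(max(1, len(toks) - window + 1)):
        windows.append(set(toks[i:i + window]))
n_win = len(windows)
cnt_i, cnt_ij = Counter(), Counter()
for win in windows:
    for w in win:
        cnt_i[w] += 1
    for a in win:
        for b in win:
            if a < b:
                cnt_ij[(a, b)] += 1
for (a, b), c in cnt_ij.items():
    pmi = math.log((c / n_win) / ((cnt_i[a] / n_win) * (cnt_i[b] / n_win)))
    if pmi > 0:
        ia, ib = n_docs + w2i[a], n_docs + w2i[b]
        A[ia, ib] = A[ib, ia] = pmi

# Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

# Two-layer GCN forward pass with one-hot features (X = I), so the
# first weight matrix doubles as the learned node-embedding table.
rng = np.random.default_rng(0)
W0 = rng.normal(scale=0.1, size=(n, 16))   # hidden size 16: assumption
W1 = rng.normal(scale=0.1, size=(16, 2))   # 2 classes: assumption
X = np.eye(n)

H = np.maximum(A_hat @ X @ W0, 0)          # layer 1 + ReLU
Z = A_hat @ H @ W1                          # layer 2: per-node class scores
probs = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)  # softmax
```

In training, cross-entropy is computed only on the labeled document rows of `probs`, but gradients still update the word-node rows of `W0`, which is how word and document embeddings are learned jointly.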