Topic models have been widely used to learn text representations and gain insight into document corpora. To perform topic discovery, most existing neural models take either the document bag-of-words (BoW) or the sequence of tokens as input, followed by variational inference and BoW reconstruction to learn the topic-word distribution. However, leveraging the topic-word distribution to learn better features during document encoding has not been explored much. To this end, we develop a framework, TAN-NTM, which processes the document as a sequence of tokens through an LSTM whose contextual outputs are attended in a topic-aware manner. We propose a novel attention mechanism which factors in the topic-word distribution, enabling the model to attend to relevant words that convey topic-related cues. The output of the topic-attention module is then used to carry out variational inference. We perform extensive ablations and experiments, resulting in a ~9-15% improvement over the scores of existing SOTA topic models in NPMI coherence on several benchmark datasets: 20Newsgroups, Yelp Review Polarity and AGNews. Further, we show that our method learns better latent document-topic features than existing topic models through improvements on two downstream tasks: document classification and topic-guided keyphrase generation.
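To make the core idea concrete, the following PyTorch sketch shows one plausible way to attend over LSTM outputs using a topic-word distribution, as the abstract describes. This is our own minimal construction, not the paper's exact formulation: the module name `TopicAwareAttention`, the projection into topic space, and the dot-product scoring are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAwareAttention(nn.Module):
    """Hypothetical sketch of topic-aware attention over LSTM outputs.

    Assumes access to a topic-word distribution `beta` of shape
    (num_topics, vocab_size); TAN-NTM's actual scoring function may differ.
    """

    def __init__(self, hidden_dim, num_topics):
        super().__init__()
        # Projects each contextual LSTM state into topic space (assumption).
        self.proj_h = nn.Linear(hidden_dim, num_topics, bias=False)

    def forward(self, lstm_out, token_ids, beta):
        # lstm_out:  (batch, seq_len, hidden_dim) contextual LSTM outputs
        # token_ids: (batch, seq_len) vocabulary indices of the input tokens
        # beta:      (num_topics, vocab_size) topic-word distribution
        # Topic relevance of each token under the topic-word distribution.
        word_topic = beta.t()[token_ids]                 # (batch, seq_len, num_topics)
        # Score each position by agreement between its contextual state
        # (projected into topic space) and its word-topic relevance.
        scores = (self.proj_h(lstm_out) * word_topic).sum(-1)  # (batch, seq_len)
        attn = F.softmax(scores, dim=-1)
        # Topic-aware document representation; in TAN-NTM, this kind of
        # attended output would feed the variational inference step.
        doc_repr = torch.bmm(attn.unsqueeze(1), lstm_out).squeeze(1)
        return doc_repr, attn
```

Under this reading, the attention weights are high where a token's contextual state aligns with topics that strongly generate that token, which is one way a model could "attend on relevant words that convey topic-related cues" before variational inference.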