Embedding-based neural topic models explicitly represent words and topics by embedding them into a homogeneous feature space, which offers higher interpretability. However, the training of these embeddings is not explicitly constrained, which leaves a larger optimization space. Moreover, a clear account of how the embeddings change during training and how these changes affect model performance is still lacking. In this paper, we propose an embedding-regularized neural topic model, which applies specially designed training constraints on word embeddings and topic embeddings to reduce the optimization space of the parameters. To reveal the changes and roles of the embeddings, we introduce \textbf{uniformity} into embedding-based neural topic models as an evaluation metric of the embedding space. On this basis, we describe how the embeddings tend to change during training via the changes in their uniformity. Furthermore, we demonstrate the impact of these embedding changes on embedding-based neural topic models through ablation studies. Experimental results on two mainstream datasets show that our model significantly outperforms baseline models in terms of the balance between topic quality and document modeling. To the best of our knowledge, this work is the first attempt to exploit uniformity to explore the changes in the embeddings of embedding-based neural topic models and their impact on model performance.
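For reference, a standard formulation of uniformity from the representation-learning literature (a sketch of what such a metric typically looks like, not necessarily the exact metric adopted in this paper) measures how evenly $\ell_2$-normalized embeddings are distributed on the unit hypersphere via a Gaussian potential kernel:
\[
\mathcal{L}_{\mathrm{uniform}}(f;t) \;=\; \log \mathop{\mathbb{E}}_{x,\,y \,\sim\, p_{\mathrm{data}}} \left[ e^{-t\,\lVert f(x) - f(y) \rVert_2^2} \right], \qquad t > 0,
\]
where $f(\cdot)$ denotes the normalized embedding function; lower values indicate that the embeddings spread more evenly over the hypersphere. Tracking such a quantity for word and topic embeddings over training is one concrete way to quantify the changes in the embedding space discussed above.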