Recent empirical studies show that adversarial topic models (ATM) can successfully capture the semantic patterns of a document by differentiating it from a dissimilar sample. However, this discriminative-generative architecture has two important drawbacks: (1) the architecture does not relate similar documents, which share the same document-word distribution of salient words; (2) it restricts the ability to integrate external information, such as the sentiment of a document, which has been shown to benefit the training of neural topic models. To address these issues, we revisit the adversarial topic architecture from the viewpoint of mathematical analysis, propose a novel approach that re-formulates the discriminative goal as an optimization problem, and design a novel sampling method that facilitates the integration of external variables. The reformulation encourages the model to incorporate the relations among similar samples and enforces a constraint on the similarity among dissimilar ones, while the sampling method, which is based on the internal input and the reconstructed output, helps inform the model of the salient words contributing to the main topic. Experimental results show that our framework outperforms other state-of-the-art neural topic models in terms of topic coherence on three common benchmark datasets spanning various domains, vocabulary sizes, and document lengths.