Discovering latent topics from text corpora has been studied for decades. Many existing topic models adopt a fully unsupervised setting, and the topics they discover may not cater to users' particular interests due to their inability to leverage user guidance. Although there exist seed-guided topic discovery approaches that leverage user-provided seeds to discover topic-representative terms, they are less concerned with two factors: (1) the existence of out-of-vocabulary seeds and (2) the power of pre-trained language models (PLMs). In this paper, we generalize the task of seed-guided topic discovery to allow out-of-vocabulary seeds. We propose a novel framework, named SeeTopic, in which the general knowledge of PLMs and the local semantics learned from the input corpus mutually benefit each other. Experiments on three real datasets from different domains demonstrate the effectiveness of SeeTopic in terms of topic coherence, accuracy, and diversity.