Document classification is the detection of specific content of interest in text documents. In contrast to data-driven machine learning classifiers, knowledge-based classifiers can be constructed from domain-specific knowledge, which usually takes the form of a collection of subject-related keywords. While typical knowledge-based classifiers compute a prediction score based on keyword abundance, they generally suffer from noisy detections due to the lack of a guiding principle for gauging keyword matches. In this paper, we propose a novel knowledge-based model equipped with Shannon Entropy, which measures the richness of information and favors uniform and diverse keyword matches. Without invoking any positive samples, this method provides a simple and explainable solution for document classification. We show that Shannon Entropy significantly improves recall at a fixed false positive rate. We also show that the model is more robust to changes in the data distribution at inference time than traditional machine learning classifiers, particularly when positive training samples are very limited.
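To make the entropy criterion concrete, a minimal sketch in our own notation (the abstract does not specify the exact scoring function used by the model): given match counts $n_1, \dots, n_K$ for the $K$ subject-related keywords in a document, the Shannon Entropy of the match distribution is

\[
H = -\sum_{i=1}^{K} p_i \log p_i, \qquad p_i = \frac{n_i}{\sum_{j=1}^{K} n_j},
\]

which is maximized when matches are spread uniformly across many distinct keywords and approaches zero when they concentrate on a single keyword, so scoring by $H$ favors uniform and diverse keyword coverage over repeated hits on one term.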