We study the detection of propagandistic text fragments in news articles. Rather than learning solely from input-output pairs in the training data, we introduce an approach that injects declarative knowledge of fine-grained propaganda techniques. We leverage declarative knowledge expressed in both natural language and first-order logic. The former refers to the literal definition of each propaganda technique, which we use to derive class representations that regularize the model parameters. The latter refers to logical consistency between coarse- and fine-grained predictions, which we use to regularize the training process with propositional Boolean expressions. We conduct experiments on the Propaganda Techniques Corpus, a large, manually annotated dataset for fine-grained propaganda detection. Experiments show that our method achieves superior performance, demonstrating that injecting declarative knowledge expressed in both natural language and first-order logic helps the model make more accurate predictions.
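To make the second mechanism concrete, here is a minimal sketch of a logical-consistency regularizer. It assumes the rule "if any fine-grained technique is predicted, the coarse-grained propaganda label must also hold" (fine_k → coarse), softened into a differentiable penalty on probabilities; the function name and the hinge-style relaxation are illustrative assumptions, not the paper's exact formulation.

```python
def consistency_loss(p_coarse, p_fine_list):
    """Soft relaxation of the rule fine_k -> coarse.

    p_coarse: probability that the fragment is propaganda at all.
    p_fine_list: probabilities for each fine-grained technique.
    Each fine-grained probability exceeding the coarse probability
    violates the implication, and the excess is penalized.
    (Illustrative sketch, not the authors' exact loss.)
    """
    return sum(max(0.0, p_fine - p_coarse) for p_fine in p_fine_list)
```

In training, a term like this would be added to the standard classification loss, so gradients push the coarse and fine-grained predictions toward logical agreement.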