The spread of misinformation, propaganda, and flawed argumentation has been amplified in the Internet era. Given the volume of data and the subtlety of identifying violations of argumentation norms, it is essential to support information analytics tasks, such as content moderation, with trustworthy methods that can identify logical fallacies. In this paper, we formalize prior theoretical work on logical fallacies into a comprehensive three-stage evaluation framework comprising detection, coarse-grained classification, and fine-grained classification. We adapt existing datasets to each stage of this framework. We employ three families of robust and explainable methods based on prototype reasoning, instance-based reasoning, and knowledge injection; these methods combine language models with background knowledge and explainable mechanisms. Moreover, we address data sparsity with strategies for data augmentation and curriculum learning. Our three-stage framework natively consolidates prior datasets and methods from existing tasks, such as propaganda detection, serving as an overarching evaluation testbed. We extensively evaluate these methods on our datasets, focusing on their robustness and explainability. Our results provide insight into the strengths and weaknesses of the methods across the framework's components and fallacy classes, indicating that fallacy identification is a challenging task that may require specialized forms of reasoning to capture the various classes. We share our open-source code and data on GitHub to support further work on logical fallacy identification.