The ease and speed with which misinformation and propaganda spread on the Web motivate the need for trustworthy technology that detects fallacies in natural language arguments. However, state-of-the-art language modeling methods lack robustness on tasks such as logical fallacy classification that require complex reasoning. In this paper, we propose a Case-Based Reasoning method that classifies new cases of logical fallacy via language-modeling-driven retrieval and adaptation of historical cases. We design four complementary strategies to enrich the input representation for our model, based on external information about goals, explanations, counterarguments, and argument structure. Our experiments in in-domain and out-of-domain settings indicate that Case-Based Reasoning improves the accuracy and generalizability of language models. Our ablation studies confirm that the representations of similar cases have a strong impact on model performance, that models perform well with fewer retrieved cases, and that the size of the case database has a negligible effect on performance. Finally, we dive deeper into the relationship between the properties of the retrieved cases and model performance.