In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argument, as valid arguments: it inserts, for example, missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence.