Open-textured terms in written rules are typically settled through interpretive argumentation. Ongoing work has attempted to catalogue the schemes used in such interpretive argumentation. But how does the use of these schemes affect the way people actually argue about, and reason over, the proper interpretations of open-textured terms? Using the interpretive-argument-eliciting game Aporia as our framework, we carried out an empirical study to answer this question. Unlike previous work, we did not allow participants to argue for interpretations arbitrarily, but only with arguments that fit a given set of interpretive argument templates. Finally, we analyze the results captured in this new dataset, focusing on practical implications for the development of interpretation-capable artificial reasoners.