An interpretable system for open-domain reasoning needs to express its reasoning process in a transparent form. Natural language is an attractive representation for this purpose -- it is both highly expressive and easy for humans to understand. However, manipulating natural language statements in logically consistent ways is hard: models must cope with variation in how meaning is expressed while remaining precise. In this paper, we describe ParaPattern, a method for building models to generate deductive inferences from diverse natural language inputs without direct human supervision. We train BART-based models (Lewis et al., 2020) to generate the result of applying a particular logical operation to one or more premise statements. Crucially, we develop a largely automated pipeline for constructing suitable training examples from Wikipedia. We evaluate our models using out-of-domain sentence compositions from the QASC (Khot et al., 2020) and EntailmentBank (Dalvi et al., 2021) datasets as well as targeted perturbation sets. Our results show that our models are substantially more accurate and flexible than baseline systems. ParaPattern achieves 85% validity on examples of the 'substitution' operation from EntailmentBank without the use of any in-domain training data, matching the performance of a model fine-tuned for EntailmentBank. The full source code for our method is publicly available.