The recent success of Bayesian methods in neuroscience and artificial intelligence suggests the hypothesis that the brain is a Bayesian machine. Since logic and learning are both activities of the human brain, this leads to a further hypothesis: that a common Bayesian interpretation underlies both logical reasoning and machine learning. In this paper, we introduce a generative model of logical consequence relations. It formalises how the truth value of a sentence is probabilistically generated from a probability distribution over states of the world. We show that the generative model characterises a classical consequence relation, a paraconsistent consequence relation and a nonmonotonic consequence relation. In particular, the generative model yields a new consequence relation that outperforms these in reasoning with inconsistent knowledge. We also show that the generative model yields a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
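The idea that a sentence's truth value is generated from a distribution over world-states can be illustrated with a minimal sketch. This is not the paper's exact model: here states are hypothetical truth assignments to two atoms, the prior is assumed uniform, and consequence is read off as a conditional probability of 1.

```python
from itertools import product

# States of the world: truth assignments to atomic propositions.
# The atoms and the uniform prior below are illustrative assumptions.
atoms = ["rain", "wet"]
states = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]
prior = {i: 1 / len(states) for i in range(len(states))}

def prob(sentence):
    """P(sentence): total probability of the states that satisfy it."""
    return sum(prior[i] for i, s in enumerate(states) if sentence(s))

def entails(premise, conclusion):
    """Probabilistic reading of consequence: P(conclusion | premise) = 1."""
    p = prob(premise)
    joint = prob(lambda s: premise(s) and conclusion(s))
    return p > 0 and joint / p == 1.0

# "rain and wet" entails "wet"; "rain" alone does not.
print(entails(lambda s: s["rain"] and s["wet"], lambda s: s["wet"]))  # True
print(entails(lambda s: s["rain"], lambda s: s["wet"]))               # False
```

Under this reading, classical entailment corresponds to conditional probability 1 under a prior concentrated on consistent states; relaxing that threshold or the prior is one way such a model can accommodate nonmonotonic and inconsistency-tolerant reasoning.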