Recent advances in AI and its growing range of applications have highlighted the need to ensure its trustworthiness for legal, ethical, and even commercial reasons. Sub-symbolic machine learning systems, such as LLMs, simulate reasoning but hallucinate, and their decisions can be neither explained nor audited, both crucial requirements for trustworthiness. Rule-based reasoners, such as Cyc, on the other hand, can provide the chain of reasoning steps, but they are complex and rely on a large number of specialized reasoners. We propose a middle ground based on s(CASP), a goal-directed constraint answer set programming reasoner that employs a small number of mechanisms to emulate reliable and explainable human-style commonsense reasoning. In this paper, we explain how s(CASP) supports the 16 desiderata for trustworthy AI introduced by Doug Lenat and Gary Marcus (2023), as well as two additional ones: inconsistency detection and the assumption of alternative worlds. To illustrate the feasibility and synergies of s(CASP), we present a range of diverse applications, including a conversational chatbot and a virtually embodied reasoner.
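As a brief illustration of the style of commonsense reasoning s(CASP) supports, the following minimal sketch encodes a default rule with an exception; the predicate names and constants (`tweety`, `sam`) are illustrative and not taken from the paper.

```prolog
% Default: birds fly unless there is evidence of abnormality.
flies(X) :- bird(X), not abnormal(X).

% Exception: penguins are abnormal (with respect to flying).
abnormal(X) :- penguin(X).
bird(X)     :- penguin(X).

bird(tweety).
penguin(sam).

% Query: goal-directed evaluation succeeds for tweety and fails for sam,
% and s(CASP) can produce a justification tree explaining each outcome.
?- flies(tweety).
```

Under this sketch, proving `flies(tweety)` relies on the assumption `not abnormal(tweety)` holding in the answer set, while `flies(sam)` fails because `abnormal(sam)` is derivable; the justification of such (non-)derivations is the kind of auditable reasoning chain the abstract refers to.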