The formalization of action and obligation using logic languages is a topic of increasing relevance in the field of ethics for AI. Having an expressive syntactic and semantic framework to reason about agents' decisions in moral situations allows for unequivocal representations of the components of behavior that are relevant when assigning blame (or praise) for outcomes to said agents. Two very important components of behavior in this respect are belief and belief-based action. In this work we present a logic of doxastic oughts, obtained by extending epistemic deontic stit theory with beliefs. On the one hand, the semantics for formulas involving belief operators is based on probability measures. On the other, the semantics for doxastic oughts relies on a notion of optimality, where the underlying choice rule is maximization of expected utility. We introduce an axiom system for the resulting logic, and we address its soundness, completeness, and decidability. These results are significant for the line of research that intends to use proof systems of epistemic, doxastic, and deontic logics to help test the ethical behavior of AI through theorem-proving and model-checking.
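As a minimal illustration of the choice rule mentioned above (not the paper's formal semantics), the following sketch models an agent's belief as a probability measure over states and identifies the actions the agent ought to take as those maximizing expected utility; all names (`belief`, `utility`, `optimal_actions`) are hypothetical.

```python
# Sketch, assuming: a belief is a probability distribution over states,
# and each (action, state) pair has a real-valued utility.

def expected_utility(action, belief, utility):
    """Expected utility of an action under a probabilistic belief."""
    return sum(p * utility[(action, state)] for state, p in belief.items())

def optimal_actions(actions, belief, utility):
    """The 'ought' set: actions that maximize expected utility."""
    best = max(expected_utility(a, belief, utility) for a in actions)
    return [a for a in actions if expected_utility(a, belief, utility) == best]

# Toy example with two actions and two states.
belief = {"s1": 0.7, "s2": 0.3}
utility = {("a", "s1"): 1.0, ("a", "s2"): 0.0,
           ("b", "s1"): 0.4, ("b", "s2"): 0.9}
# EU(a) = 0.7*1.0 + 0.3*0.0 = 0.70
# EU(b) = 0.7*0.4 + 0.3*0.9 = 0.55, so the agent ought to do "a".
print(optimal_actions(["a", "b"], belief, utility))
```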