The Multi-valued Action Reasoning System (MARS) is an automated, value-based ethical decision-making model for artificial agents (AI). Given a set of available actions and an underlying moral paradigm, MARS identifies the ethically preferred action. It can be used to implement and model different ethical theories and moral paradigms, as well as combinations thereof, in the context of automated practical reasoning and normative decision analysis. It can also be used to model moral dilemmas and to discover which moral paradigms yield the desired outcomes in them. In this paper, we give a condensed description of MARS, explain its uses, and situate it comparatively within the existing literature.
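To make the core idea concrete, the selection of an ethically preferred action from a set of candidates under a moral paradigm can be sketched as a weighted evaluation over moral values. This is a minimal illustrative sketch, not the paper's actual formalism: the actions, values, scores, and the weighted-sum aggregation are all assumptions made here for illustration.

```python
# Hypothetical sketch of value-based action selection in the spirit of MARS.
# All actions, value impacts, and paradigm weights below are illustrative
# assumptions, not taken from the paper.

ACTIONS = {
    # action name -> impact of that action on each moral value
    "tell_truth":  {"honesty": 1.0, "kindness": -0.5},
    "white_lie":   {"honesty": -1.0, "kindness": 1.0},
    "stay_silent": {"honesty": 0.0, "kindness": 0.2},
}

def preferred_action(actions, paradigm):
    """Return the action maximizing the paradigm-weighted sum of value impacts."""
    def score(name):
        impacts = actions[name]
        return sum(paradigm.get(value, 0.0) * s for value, s in impacts.items())
    return max(actions, key=score)

# A moral paradigm is modeled here as a weight per moral value.
truth_centered = {"honesty": 2.0, "kindness": 1.0}
care_centered  = {"honesty": 0.5, "kindness": 2.0}

print(preferred_action(ACTIONS, truth_centered))  # tell_truth
print(preferred_action(ACTIONS, care_centered))   # white_lie
```

The same machinery also illustrates the dilemma-analysis direction mentioned in the abstract: by sweeping over candidate paradigms (weight assignments) and checking which ones make a desired action come out preferred, one recovers the paradigms that lead to a given outcome.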