The rise of human-information systems, cybernetic systems, and increasingly autonomous systems requires the application of epistemic frameworks to machines and human-machine teams. This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of knowledge. Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and the attribution of knowledge. The aim is not to provide ethical justification for or against LAWS, but to illustrate how epistemological frameworks can be used in conjunction with moral apparatus to guide the design and deployment of future systems. The models discussed in this chapter aim to make Article 36 reviews of LAWS systematic, expedient, and evaluable. A Bayesian virtue epistemology is proposed to enable justified actions under uncertainty that meet the requirements of the Law of Armed Conflict and International Humanitarian Law. Epistemic concepts can provide some of the apparatus needed to meet explainability and transparency requirements in the development, evaluation, deployment, and review of ethical AI.
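As a minimal illustration of the Bayesian belief revision that underlies the proposed epistemology (a sketch, not the chapter's own model), the following example updates a prior belief about a classification hypothesis in light of new sensor evidence; all probabilities are hypothetical:

```python
# Minimal Bayesian update sketch. The scenario, prior, and likelihoods are
# hypothetical illustrations, not values drawn from the chapter.

def bayes_update(prior: float,
                 p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' rule for a binary hypothesis H."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / marginal

# Prior belief that an observed object satisfies the targeting hypothesis,
# revised by a positive sensor reading with the stated likelihoods.
prior = 0.30
posterior = bayes_update(prior,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.10)
print(round(posterior, 3))  # prints 0.794
```

In this framing, a justified action under uncertainty is one taken only when the posterior (together with the agent's competencies and context) meets a decision threshold, which is what makes the resulting review criteria evaluable rather than ad hoc.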