The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper "Critiquing the Reasons for Making Artificial Moral Agents" critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; such machines are better moral reasoners than humans; and building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.