The recent rapid advancements in artificial intelligence research and deployment have sparked broader discussion about the potential ramifications of socially and emotionally intelligent AI. The question is not whether research can produce such affectively-aware AI, but when it will. What will it mean for society when machines -- and the corporations and governments they serve -- can "read" people's minds and emotions? What should developers and operators of such AI do, and what should they refrain from doing? The goal of this article is to anticipate some of the potential implications of these developments and to propose a set of guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI, in order to guide researchers, industry professionals, and policy-makers. We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers from those of the entities that deploy such AI -- which we term Operators. Our analysis produces two pillars that clarify the responsibilities of each of these stakeholders: Provable Beneficence, which rests on proving the effectiveness of the AI, and Responsible Stewardship, which governs the responsible collection, use, and storage of data and the decisions made from such data. We end with recommendations for researchers, developers, and operators, as well as for regulators and law-makers.