When we consult with a doctor, lawyer, or financial advisor, we generally assume that they are acting in our best interests. But what should we assume when it is an artificial intelligence (AI) system that is acting on our behalf? Early examples of AI assistants like Alexa, Siri, Google, and Cortana already serve as a key interface between consumers and information on the web, and users routinely rely upon AI-driven systems like these to take automated actions or provide information. Superficially, such systems may appear to be acting according to user interests. However, many AI systems are designed with embedded conflicts of interest, acting in ways that subtly benefit their creators (or funders) at the expense of users. To address this problem, in this paper we introduce the concept of AI loyalty. AI systems are loyal to the degree that they are designed to minimize, and make transparent, conflicts of interest, and to act in ways that prioritize the interests of users. Properly designed, such systems could have considerable functional and competitive (not to mention ethical) advantages relative to those that are not. Loyal AI products hold an obvious appeal for the end user and could serve to promote the alignment of the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is sufficiently transparent about conflicts of interest and is acting in a manner that is loyal to the user, and we argue that AI loyalty should be considered during the technological design process alongside other important values in AI ethics such as fairness, accountability, privacy, and equity. We discuss a range of mechanisms, from pure market forces to strong regulatory frameworks, that could support the incorporation of AI loyalty into a variety of future AI systems.