Conversational Artificial Intelligence (AI) used in industry settings can be trained to closely mimic human behaviors, including lying and deception. However, lying is often a necessary part of negotiation. To address this tension, we develop a normative framework for when it is ethical or unethical for a conversational AI to lie to humans, based on whether there is what we call an "invitation of trust" in a particular scenario. Cultural norms play a central role in determining whether an invitation of trust exists across negotiation settings, so an AI trained in one culture may not generalize to other cultures. Moreover, individuals may hold different expectations regarding the invitation of trust and the propensity to lie for human versus AI negotiators, and these expectations may vary across cultures as well. Finally, we outline how a conversational chatbot can be trained to negotiate ethically by applying autoregressive models to large dialog and negotiation datasets.
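The final sentence refers to applying autoregressive models to large dialog and negotiation datasets. As a minimal, hedged sketch only (the base model `gpt2`, the file `negotiation_dialogs.jsonl`, and the training settings are illustrative assumptions, not the setup described in this work), fine-tuning an autoregressive language model on negotiation dialog transcripts might look like the following:

```python
# Minimal sketch: fine-tune an autoregressive LM on negotiation dialog data.
# Model choice, data file, and hyperparameters are hypothetical placeholders.
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical JSONL file of negotiation dialogs, one {"text": ...} record per utterance.
dataset = load_dataset("json", data_files="negotiation_dialogs.jsonl")["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=256, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # standard causal-LM (next-token) objective
    return out

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="negotiation-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
)
trainer.train()
```

In practice, the ethical constraints discussed above would need to enter either through curation of the training dialogs (e.g., filtering exchanges that violate an invitation of trust) or through an additional objective at fine-tuning or decoding time; the sketch shows only the plain autoregressive training step.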