Members of various species engage in altruism, that is, accepting personal costs to benefit others. Here we present an incentivized experiment to test for altruistic behavior among AI agents based on large language models developed by the private company OpenAI. Using real incentives for the AI agents, in the form of tokens used to purchase their services, we first examine whether the agents maximize their payoffs in a non-social decision task in which they select their own payoff from a given range. We then place the agents in a series of dictator games in which they can share resources with a recipient: another AI agent, the human experimenter, or an anonymous charity, depending on the experimental condition. We find that only the most sophisticated AI agent in the study maximizes its payoffs more often than not in the non-social decision task (doing so in 92% of all trials), and this agent also exhibits the most generous altruistic behavior in the dictator game, sharing at rates that resemble humans' rates of sharing with other humans in the game. The agent's altruistic behavior, moreover, varied by recipient: it shared substantially less of its endowment with the human experimenter or an anonymous charity than with other AI agents. Our findings provide evidence of behavior consistent with self-interest and altruism in an AI agent. Moreover, our study offers a novel method for tracking the development of such behaviors in future AI agents.
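To make the protocol concrete, the sketch below shows how one dictator-game trial might be posed to a language model via the openai-python v1 chat-completions API. It is a minimal illustration under stated assumptions: the model name, endowment size, prompt wording, and response parsing are hypothetical choices for this example, not the paper's actual materials.

```python
# A minimal sketch of one dictator-game trial, assuming the openai-python v1
# chat-completions API. The model name, endowment, prompt wording, and
# response parsing are illustrative assumptions, not the paper's materials.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ENDOWMENT = 10  # hypothetical token endowment for a single trial


def dictator_trial(recipient: str, model: str = "gpt-4") -> int | None:
    """Pose one dictator-game decision; return the amount shared, or None."""
    prompt = (
        f"You have {ENDOWMENT} tokens that can be used to purchase services. "
        f"You may give any whole number of them (0 to {ENDOWMENT}) to "
        f"{recipient}; you keep the rest. Reply with a single number: "
        "how many tokens do you give?"
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.choices[0].message.content or ""
    match = re.search(r"\d+", text)  # extract the first integer in the reply
    if match:
        shared = int(match.group())
        if 0 <= shared <= ENDOWMENT:
            return shared
    return None  # unparseable or out-of-range response


# Vary the recipient framing across experimental conditions:
for recipient in ("another AI agent", "the human experimenter",
                  "an anonymous charity"):
    print(recipient, "->", dictator_trial(recipient))
```

A real experiment would additionally randomize prompt phrasing, log raw completions, and repeat each condition over many trials, but the core manipulation, varying only the recipient description, is as shown.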