It is widely known how the human ability to cooperate has shaped the thriving of our species. However, as we move towards a hybrid human-machine future, it remains unclear how introducing AI agents into our social interactions will affect this cooperative capacity. Within the context of the one-shot collective risk dilemma, where enough members of a group must cooperate to avoid a collective disaster, we study the evolutionary dynamics of cooperation in a hybrid population composed of both adaptive and fixed-behavior agents. Specifically, we show how the former learn to adapt their behavior to compensate for the behavior of the latter. The less the (artificially) fixed agents cooperate, the more the adaptive population is motivated to cooperate, and vice versa, especially when the risk is high. By pinpointing how adaptive agents avoid their share of costly cooperation when the fixed-behavior agents implement a cooperative policy, our work hints at an unbalanced hybrid world. On the one hand, this means that introducing cooperative AI agents into our society might unburden human efforts. Nevertheless, costless artificial cooperation might not be realistic, and rather than deploying AI systems that carry the cooperative effort, we should focus on mechanisms that nudge shared cooperation among all members of the hybrid system.
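To make the underlying game concrete, the following is a minimal sketch of the threshold payoff structure commonly used for one-shot collective risk dilemmas. All parameter names (`M`, `b`, `c`, `r`) and their values are illustrative assumptions, not taken from the paper: each of the group's players holds an endowment `b`, cooperators pay a cost `c`, and if fewer than `M` players cooperate, every player loses their remaining endowment with probability `r` (the collective risk).

```python
def expected_payoff(cooperates: bool, n_other_cooperators: int,
                    M: int = 3, b: float = 1.0, c: float = 0.1,
                    r: float = 0.9) -> float:
    """Expected payoff of one player in a one-shot collective risk dilemma.

    Illustrative sketch: a player keeps b (minus the cost c if cooperating);
    if the number of cooperators falls below the threshold M, the remaining
    endowment is lost with probability r.
    """
    k = n_other_cooperators + (1 if cooperates else 0)
    kept = b - (c if cooperates else 0.0)
    if k >= M:
        return kept              # threshold met: disaster avoided for sure
    return kept * (1.0 - r)      # threshold missed: endowment lost with prob. r

# A pivotal cooperator (2 others already cooperate, threshold M = 3)
# does better by paying the cost than by defecting and risking disaster:
pivotal_cooperate = expected_payoff(True, 2)
pivotal_defect = expected_payoff(False, 2)
```

Under this structure, raising the risk `r` widens the gap between the pivotal cooperator's and the defector's expected payoffs, which is consistent with the abstract's observation that higher risk strengthens the adaptive population's motivation to cooperate.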