In multi-agent systems, intelligent agents are tasked with making decisions that have optimal outcomes when the actions of the other agents are as expected, whilst also being prepared for unexpected behaviour. In this work, we introduce a new risk-averse solution concept that allows the learner to accommodate unexpected actions by finding the minimum-variance strategy given any level of expected return. We prove the existence of such a risk-averse equilibrium, and propose a fictitious-play-type learning algorithm for smaller games that enjoys provable convergence guarantees in certain game classes (e.g., zero-sum or potential games). Furthermore, we propose an approximation method for larger games based on iterative population-based training that generates a population of risk-averse agents. Empirically, our equilibrium is shown to reduce reward variance: off-equilibrium behaviour has a far smaller impact on our risk-averse agents than on agents playing other equilibrium solutions. Importantly, we show that our population of agents approximating a risk-averse equilibrium is particularly effective against unseen opposing populations, especially at guaranteeing a minimal level of performance, which is critical for safety-aware multi-agent systems.
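One way to read the "minimum-variance strategy given any level of expected return" criterion is as a mean-variance constrained program over the learner's mixed strategy. The following is only a sketch of such a formalization; the symbols $\pi_i$, $\pi_{-i}$, $R_i$, and the return threshold $c$ are illustrative placeholders and not necessarily the paper's own notation or exact definition:

$$
\pi_i^{\star} \in \arg\min_{\pi_i \in \Delta(A_i)} \operatorname{Var}_{a \sim (\pi_i,\, \pi_{-i})}\!\bigl[ R_i(a) \bigr]
\quad \text{s.t.} \quad
\mathbb{E}_{a \sim (\pi_i,\, \pi_{-i})}\!\bigl[ R_i(a) \bigr] \ge c,
$$

i.e., among all strategies achieving at least the expected return $c$ against the other agents' strategies $\pi_{-i}$, the learner selects one with the smallest return variance.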