We investigate the use of a multi-agent multi-armed bandit (MA-MAB) setting for modeling repeated Cournot oligopoly games, where the firms acting as agents choose from a set of arms representing discrete production quantities. Each agent interacts with a separate and independent bandit problem, making sequential choices among arms to maximize its own reward. Agents have no information about the environment; they can only observe their own reward after taking an action. However, the market demand is a stationary function of total industry output, and random entry or exit from the market is not allowed. Given these assumptions, we find that an $\epsilon$-greedy approach offers a more viable learning mechanism than other traditional MAB approaches, as it does not require any additional knowledge of the system to operate. We also propose two novel approaches that take advantage of the ordered action space: $\epsilon$-greedy+HL and $\epsilon$-greedy+EL. These new approaches help firms focus on more profitable actions by eliminating less profitable choices and are thus designed to optimize exploration. We use computer simulations to study the emergence of various equilibria in the outcomes and perform an empirical analysis of the joint cumulative regrets.
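To make the setting concrete, the sketch below illustrates the kind of MA-MAB Cournot simulation described above: independent $\epsilon$-greedy firms repeatedly choose discrete quantities and observe only their own profit. This is not the authors' code; the linear inverse demand $P(Q) = \max(a - bQ, 0)$, the constant marginal cost, the quantity grid, and all parameter values are illustrative assumptions.

```python
import random

# Minimal sketch of independent epsilon-greedy firms in a repeated Cournot game.
# Assumed (not from the paper): linear inverse demand P(Q) = max(a - b*Q, 0),
# constant marginal cost c, and the specific quantity grid and epsilon below.

A, B, COST = 100.0, 1.0, 10.0          # assumed demand intercept, slope, marginal cost
QUANTITIES = list(range(0, 51, 5))     # discrete arms: candidate production quantities
EPSILON, ROUNDS, N_FIRMS = 0.1, 5000, 2

class EpsilonGreedyFirm:
    """One firm treating its quantity choice as an independent bandit problem."""
    def __init__(self):
        self.counts = [0] * len(QUANTITIES)
        self.values = [0.0] * len(QUANTITIES)   # running mean reward per arm

    def choose(self):
        if random.random() < EPSILON:           # explore a random arm
            return random.randrange(len(QUANTITIES))
        # exploit the arm with the highest estimated reward
        return max(range(len(QUANTITIES)), key=lambda i: self.values[i])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

firms = [EpsilonGreedyFirm() for _ in range(N_FIRMS)]
for _ in range(ROUNDS):
    arms = [f.choose() for f in firms]
    total_q = sum(QUANTITIES[a] for a in arms)
    price = max(A - B * total_q, 0.0)           # stationary market demand
    for f, a in zip(firms, arms):
        profit = (price - COST) * QUANTITIES[a] # each firm sees only its own reward
        f.update(a, profit)

# Report each firm's currently preferred quantity after learning
print([QUANTITIES[max(range(len(QUANTITIES)), key=lambda i: f.values[i])] for f in firms])
```

The ordered-action-space variants mentioned in the abstract ($\epsilon$-greedy+HL and $\epsilon$-greedy+EL) would, under this reading, additionally prune or restrict the `QUANTITIES` grid toward more profitable regions as learning progresses; that pruning logic is not shown here.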