We consider the control of decentralized learning dynamics for agents in an anti-coordination network game. In the anti-coordination network game, there is a preferred action in the absence of neighbors' actions, and the utility an agent receives from the preferred action decreases as more of its neighbors select it, potentially causing the agent to select a less desirable action. The decentralized dynamics, based on the iterated elimination of dominated strategies, converge for the considered game. Given a convergent action profile, we measure anti-coordination by the number of edges in the underlying graph that have at least one endpoint not taking the preferred action. The maximum anti-coordination (MAC) problem seeks an optimal set of agents to control under a finite budget so that the anti-coordination of the action profile reached by the dynamics at convergence is maximized. We show that MAC is submodular in expectation on dense bipartite networks for any realization of the utility constants in the population. Using this result, we obtain a performance guarantee for the greedy agent selection algorithm for MAC. Finally, we provide a computational study showing the effectiveness of greedy node selection strategies for solving MAC on general bipartite networks.
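To make the objective and the selection rule concrete, the following is a minimal sketch, not the paper's implementation, under simplifying assumptions: actions are binary with 1 the preferred action, controlling an agent fixes it to the non-preferred action 0, and a hypothetical run_dynamics oracle returns the convergent action profile for a given controlled set. The anti_coordination function counts edges with at least one endpoint not taking the preferred action, and greedy_mac performs standard budgeted greedy selection, the rule that carries the classical (1 - 1/e)-type guarantee when the expected objective is monotone submodular.

```python
# Minimal sketch, assuming: binary actions with 1 the preferred action,
# controlled agents fixed to the non-preferred action 0, and a caller-supplied
# run_dynamics oracle (hypothetical) returning the convergent action profile.

def anti_coordination(edges, profile):
    """Count edges with at least one endpoint not taking the preferred action."""
    return sum(1 for u, v in edges if profile[u] == 0 or profile[v] == 0)

def greedy_mac(agents, edges, run_dynamics, budget):
    """Budgeted greedy selection: at each step, control the agent whose
    addition most increases anti-coordination at convergence."""
    selected = set()
    for _ in range(budget):
        base = anti_coordination(edges, run_dynamics(selected))
        best, best_gain = None, -1
        for a in set(agents) - selected:
            gain = anti_coordination(edges, run_dynamics(selected | {a})) - base
            if gain > best_gain:
                best, best_gain = a, gain
        if best is None:
            break
        selected.add(best)
    return selected

# Toy usage on a 3-node path; the stand-in dynamics just fixes controlled
# agents to 0 and lets all others keep the preferred action 1.
edges = [("a", "b"), ("b", "c")]
dynamics = lambda controlled: {v: (0 if v in controlled else 1) for v in "abc"}
print(greedy_mac({"a", "b", "c"}, edges, dynamics, budget=1))  # -> {'b'}
```

Because the dynamics are evaluated inside the marginal-gain loop, each greedy step costs one convergence run per candidate agent; the submodularity-in-expectation result stated above is what justifies attaching a provable performance guarantee to this simple rule.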