This work studies the ability of a third-party influencer to control the behavior of a multi-agent system. The controller exerts actions with the goal of guiding the agents toward target joint strategies. Under mild assumptions, this setting can be modeled as a Markov decision problem (MDP) and solved for a control policy. The setup is then refined by giving the control more degrees of freedom: the agents are partitioned into disjoint clusters, and each cluster can receive its own control. Solving for a cluster-based policy with standard techniques such as value iteration or policy iteration, however, requires exponentially more computation time, since the joint control action space grows exponentially with the number of clusters. This is addressed by the Clustered Value Iteration (CVI) algorithm, which iteratively solves for an optimal control via a round-robin approach across the clusters. CVI converges exponentially faster than standard value iteration and can find policies that closely approximate the MDP's true optimal value. For MDPs with separable reward functions, CVI converges to the true optimum. While an optimal clustering assignment is difficult to compute, a good clustering of the agents can be found with a greedy splitting algorithm, whose associated values form a monotonic, submodular lower bound on the values of the optimal clusters. Finally, these control ideas are demonstrated on simulated examples.
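To make the round-robin idea concrete, the following is a minimal tabular sketch of how a CVI-style sweep might look. All interfaces are illustrative assumptions rather than the paper's implementation: transition and reward tables P and R keyed by joint action tuples, a fixed discount factor gamma, and the function and variable names are hypothetical.

import numpy as np

def clustered_value_iteration(P, R, n_actions, gamma=0.95, tol=1e-8):
    """Round-robin value iteration over per-cluster controls (illustrative sketch).

    P[a] : (S, S) transition matrix under joint action tuple a
    R[a] : length-S reward vector under joint action tuple a
    n_actions : actions available to each cluster, e.g. [3, 3, 2]
    """
    S = next(iter(R.values())).shape[0]
    K = len(n_actions)
    V = np.zeros(S)
    policy = np.zeros((S, K), dtype=int)  # one action per (state, cluster)
    while True:
        V_prev = V.copy()
        # Round-robin sweep: optimize one cluster's action per state while the
        # other clusters' actions are held at their current values, so a sweep
        # evaluates sum_k n_k candidates per state instead of prod_k n_k.
        for k in range(K):
            for s in range(S):
                best_q, best_a = -np.inf, policy[s, k]
                for a in range(n_actions[k]):
                    joint = tuple(policy[s, :k]) + (a,) + tuple(policy[s, k + 1:])
                    q = R[joint][s] + gamma * P[joint][s] @ V
                    if q > best_q:
                        best_q, best_a = q, a
                policy[s, k] = best_a
                V[s] = best_q
        if np.max(np.abs(V - V_prev)) < tol:
            return V, policy

The per-sweep cost scales with the sum of the clusters' action counts rather than their product, which is the source of the exponential savings over standard value iteration; per the abstract, the round-robin fixed point coincides with the true optimum when the reward is separable across clusters, and otherwise closely approximates it.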