In the operation of multi-agent teams, ranging from homogeneous robot swarms to heterogeneous human-autonomy teams, unexpected events may occur. While operational efficiency is the primary objective in multi-agent task allocation problems, it is essential that the decision-making framework be intelligent enough to manage unexpected task loads with limited resources; otherwise, operational effectiveness can drop drastically as overloaded agents face unforeseen risks. In this work, we present a decision-making framework in which multi-agent teams learn task allocation with explicit consideration of load management through decentralized reinforcement learning, where idling is encouraged and unnecessary resource usage is avoided. We illustrate the effect of load management on team performance and examine agent behaviors in example scenarios. Furthermore, a measure of agent importance during collaboration is developed to infer team resilience when handling potential overload situations.
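To make the idea of "idling is encouraged and unnecessary resource usage is avoided" concrete, the sketch below shows one way a per-agent reward could be shaped. This is only a minimal illustration under assumed names and weights (AgentState, step_reward, w_task, w_load, w_idle are hypothetical), not the reward design used in the paper.

```python
from dataclasses import dataclass


@dataclass
class AgentState:
    """Hypothetical per-agent observation used only for this illustration."""
    tasks_completed: int   # tasks finished by this agent in the current step
    load: float            # current workload, normalized to [0, 1]
    is_idle: bool          # whether the agent chose the idle action


def step_reward(state: AgentState,
                w_task: float = 1.0,
                w_load: float = 0.5,
                w_idle: float = 0.1) -> float:
    """Reward trading off task completion against resource usage.

    Completing tasks is rewarded, carrying a high load is penalized,
    and choosing to idle earns a small bonus, so a learned policy
    tends to keep spare capacity for unexpected task surges.
    """
    reward = w_task * state.tasks_completed
    reward -= w_load * state.load
    if state.is_idle:
        reward += w_idle
    return reward


# Example: a lightly loaded idle agent vs. a heavily loaded busy agent.
print(step_reward(AgentState(tasks_completed=0, load=0.1, is_idle=True)))   # 0.05
print(step_reward(AgentState(tasks_completed=2, load=0.9, is_idle=False)))  # 1.55
```

With such a shaping, each agent in a decentralized setting can optimize its own reward while the load penalty and idle bonus keep some agents in reserve; the actual mechanism for encouraging idling in the proposed framework may differ.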