Task allocation is an important problem in multi-agent systems. It becomes more challenging when the team-members are humans with imperfect knowledge about their teammates' costs and the overall performance metric. While distributed task-allocation methods let the team-members engage in iterative dialog to reach a consensus, the process can take a considerable amount of time and communication. On the other hand, a centralized method that simply outputs an allocation may result in discontented human team-members who, due to their imperfect knowledge and limited computation capabilities, perceive the allocation to be unfair. To address these challenges, we propose a centralized Artificial Intelligence Task Allocation (AITA) that simulates a negotiation and produces a negotiation-aware task allocation that is fair. If a team-member is unhappy with the proposed allocation, we allow them to question it using a counterfactual. By using parts of the simulated negotiation, we are able to provide contrastive explanations that reveal minimal information about others' costs while refuting their foil. With human studies, we show that (1) the allocation proposed using our method does indeed appear fair to the majority, and (2) when a counterfactual is raised, the explanations generated are easy to comprehend and convincing. Finally, we empirically study the effect of different kinds of incompleteness on the explanation length and find that underestimation of a teammate's costs often increases it.
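To make the setting concrete, the sketch below illustrates the two ingredients the abstract describes in their simplest form: a centralized allocator that picks a cost-minimizing assignment of tasks to agents, and a counterfactual check that refutes a proposed alternative (a foil) by comparing total costs. The cost numbers, agent names, and brute-force enumeration are illustrative assumptions, not the paper's AITA algorithm.

```python
from itertools import product

# Hypothetical cost matrix: costs[agent][task] — illustrative numbers only.
costs = {
    "A": {"t1": 2, "t2": 5, "t3": 4},
    "B": {"t1": 3, "t2": 1, "t3": 6},
}
tasks = ["t1", "t2", "t3"]
agents = list(costs)

def total_cost(alloc):
    # alloc is a tuple of agent names, one per task in `tasks`.
    return sum(costs[a][t] for a, t in zip(alloc, tasks))

# Centralized allocation: enumerate all assignments and pick the cheapest
# (a stand-in for the negotiation-aware allocation AITA computes).
best = min(product(agents, repeat=len(tasks)), key=total_cost)

# Counterfactual check: a foil is refuted by showing it costs no less
# than the proposed allocation.
foil = ("B", "B", "B")
refuted = total_cost(foil) >= total_cost(best)
```

A contrastive explanation would then disclose only the cost entries needed to establish this inequality, rather than the full cost matrix.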