Human-agent teaming, in which humans and autonomous agents collaborate to accomplish a shared task, is a typical setting in human-AI collaboration. For effective collaboration, humans need an effective plan, but in realistic situations they may have difficulty computing the best plan because of cognitive limitations. In such cases, guidance from an agent with abundant computational resources can be useful. However, if the agent guides human behavior explicitly, the human may feel that they have lost autonomy and are being controlled by the agent. We therefore investigated implicit guidance conveyed through the agent's own behavior. With this type of guidance, the agent acts in a way that makes it easy for the human to find an effective plan for the collaborative task, and the human can then improve the plan. Because the human improves the plan voluntarily, they retain autonomy. We modeled a collaborative agent that provides implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms, and we demonstrated through a behavioral experiment that implicit guidance enables humans to balance improving their plans with retaining autonomy.