Given the increasing prevalence of intelligent systems capable of acting autonomously or augmenting human activities, it is important to consider scenarios in which the human, the autonomous system, or both can fail as a result of one of several contributing factors (e.g., perception). A failure by either a human or an autonomous agent may merely reduce performance, or it may be as severe as causing injury or death. We consider the hybrid human-AI teaming case in which a managing agent is tasked with identifying when to perform a delegation assignment and whether the human or the autonomous system should gain control. In this context, the manager estimates its best action based on the likelihood that either agent (human or autonomous) will fail as a result of its sensing capabilities and possible deficiencies. We model how environmental context can contribute to, or exacerbate, these sensing deficiencies. Such contexts provide cases in which the manager must learn to associate capabilities with suitability for decision-making. We demonstrate how a Reinforcement Learning (RL) manager can correct the context-delegation association and assist the hybrid team of agents in outperforming any agent working in isolation.