This paper addresses the problem of synthesizing the behavior of an AI agent that provides proactive task assistance to a human in settings, such as factory floors, where the two may coexist in a shared environment. Unlike requested assistance, proactive assistance may not be expected by the human, so it is crucial for the agent to ensure that the human is aware of how the assistance affects her task. This becomes harder when the human may have neither full knowledge of the AI agent's capabilities nor full observability of its activities. Therefore, our \textit{proactive assistant} is guided by the following three principles: \textbf{(1)} its activity decreases the human's cost towards her goal; \textbf{(2)} the human is able to recognize the potential reduction in her cost; \textbf{(3)} its activity optimizes the human's overall cost (time/resources) of achieving her goal. Through empirical evaluation and user studies, we demonstrate the usefulness of our approach.