Intelligent agents powered by AI planning assist people in complex scenarios, such as managing teams of semi-autonomous vehicles. However, AI planning models may be incomplete, leading to plans that do not adequately meet the stated objectives, especially in unpredicted situations. Humans, who are adept at identifying and adapting to unusual situations, may be able to assist planning agents in these situations by encoding their knowledge into the planner at run-time. We investigate whether people can collaborate with agents by providing their knowledge to an agent using linear temporal logic (LTL) at run-time, without changing the agent's domain model. We presented 24 participants with baseline plans for situations in which a planner had limitations, and asked the participants for workarounds for these limitations. We encoded these workarounds as LTL constraints. Results show that participants' constraints improved the expected return of the plans by 10% ($p < 0.05$) relative to baseline plans, demonstrating that human insight can be used in collaborative planning for resilience. However, over time participants used more declarative than control constraints, and declarative constraints produced plans less similar to participants' expectations, which could lead to potential trust issues.
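To make the declarative/control distinction concrete, consider two hypothetical workarounds for a vehicle $v$; the predicates and formulas below are illustrative sketches, not the study's actual encodings. A declarative constraint states a condition the plan must eventually satisfy, whereas a control constraint restricts the agent's action choices along the way:
\[
\underbrace{\Diamond\, \mathit{at}(v, \mathit{base})}_{\text{declarative: eventually return to base}}
\qquad\qquad
\underbrace{\Box\bigl(\mathit{lowFuel}(v) \rightarrow \bigcirc\, \mathit{refuel}(v)\bigr)}_{\text{control: whenever fuel is low, refuel next}}
\]
Here $\Diamond$, $\Box$, and $\bigcirc$ denote the standard LTL operators \emph{eventually}, \emph{always}, and \emph{next}. Either formula can be conjoined with the planner's goal at run-time, constraining the plans it produces without modifying the underlying domain model.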