To facilitate effective human-robot interaction (HRI), trust-aware HRI has been proposed, wherein the robotic agent explicitly considers the human's trust during its planning and decision making. The success of trust-aware HRI depends on the specification of a trust dynamics model and a trust-behavior model. In this study, we proposed a novel trust-behavior model, namely the reverse psychology model, and compared it against the commonly used disuse model. We examined how the two models affect the robot's optimal policy and the human-robot team performance. Results indicate that the robot will deliberately "manipulate" the human's trust under the reverse psychology model. To correct this "manipulative" behavior, we proposed a trust-seeking reward function that facilitates trust establishment without significantly sacrificing team performance.
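As a rough, illustrative sketch (not notation from the abstract itself): in a planning formulation of this kind, a trust-seeking reward can be viewed as the task reward plus a shaping bonus on trust gain. Here $R_{\text{task}}$, the trust estimate $t_k$ at step $k$, and the weight $\lambda \geq 0$ are assumed symbols for illustration only:

$$R(s_k, a_k) = R_{\text{task}}(s_k, a_k) + \lambda \,(t_{k+1} - t_k)$$

Under this reading, $\lambda = 0$ recovers a pure performance objective, while a larger $\lambda$ trades some immediate task reward for faster trust establishment.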