It is not scalable for assistive robots to have all functionality pre-programmed before they are introduced to users. Instead, it is more realistic for agents to perform supplemental on-site learning. This opportunity to learn user and environment particularities is especially valuable for care robots that assist with individualized caregiving activities in residential or nursing home settings. Many assistive robots, ranging in complexity from Roomba to Pepper, already conduct some of their learning in the home, observable to the user. However, we lack an understanding of how witnessing this learning affects the user. We therefore propose to assess end-user attitudes toward embodied robots that conduct some learning in the home, compared to robots delivered fully capable. In this virtual, between-subjects study, we recruit end users (caregivers and care recipients) from nursing homes and investigate user trust in three different task domains: navigation, manipulation, and preparation. Informed by this first study, in which we identify agent learning as a key factor in determining trust, we propose a second study to explore how that trust can be modulated. This second, in-person study investigates the effectiveness of apologies, explanations of robot failure, and transparency of learning at improving trust in embodied learning robots.